00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v23.11" build number 601
00:00:00.001 originally caused by:
00:00:00.002 Started by upstream project "nightly-trigger" build number 3267
00:00:00.002 originally caused by:
00:00:00.002 Started by timer
00:00:00.091 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.091 The recommended git tool is: git
00:00:00.092 using credential 00000000-0000-0000-0000-000000000002
00:00:00.093 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.115 Fetching changes from the remote Git repository
00:00:00.118 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.146 Using shallow fetch with depth 1
00:00:00.146 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.146 > git --version # timeout=10
00:00:00.174 > git --version # 'git version 2.39.2'
00:00:00.174 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.188 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.188 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.400 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.411 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.423 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:04.423 > git config core.sparsecheckout # timeout=10
00:00:04.434 > git read-tree -mu HEAD # timeout=10
00:00:04.452 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:04.473 Commit message: "inventory: add WCP3 to free inventory"
00:00:04.473 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
00:00:04.560 [Pipeline] Start of Pipeline
00:00:04.574 [Pipeline] library
00:00:04.576 Loading library shm_lib@master
00:00:04.576 Library shm_lib@master is cached. Copying from home.
00:00:04.593 [Pipeline] node
00:00:04.602 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:00:04.603 [Pipeline] {
00:00:04.615 [Pipeline] catchError
00:00:04.616 [Pipeline] {
00:00:04.627 [Pipeline] wrap
00:00:04.637 [Pipeline] {
00:00:04.645 [Pipeline] stage
00:00:04.646 [Pipeline] { (Prologue)
00:00:04.667 [Pipeline] echo
00:00:04.669 Node: VM-host-SM0
00:00:04.675 [Pipeline] cleanWs
00:00:04.685 [WS-CLEANUP] Deleting project workspace...
00:00:04.685 [WS-CLEANUP] Deferred wipeout is used...
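
The prologue above pins the build-pool repository to a single revision with a shallow fetch rather than a full clone. For reference, the same sequence can be reproduced by hand with plain git; this is a minimal sketch (repository URL and SHA are copied from the log, the jbp directory name and the bare git init are assumptions for illustration):

#!/usr/bin/env bash
# Minimal re-creation of the shallow, pinned checkout above.
# URL and revision are copied from the log; the "jbp" directory name
# is assumed for illustration.
set -euo pipefail

repo=https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
rev=9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d

git init -q jbp && cd jbp
# --depth=1 fetches only the tip of master, which is why the fetch
# completes in about four seconds in the timestamps above.
git fetch --tags --force --progress --depth=1 -- "$repo" refs/heads/master
git checkout -f "$rev"   # detached HEAD at the pinned revision
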
00:00:04.694 [WS-CLEANUP] done
00:00:04.894 [Pipeline] setCustomBuildProperty
00:00:04.995 [Pipeline] httpRequest
00:00:05.022 [Pipeline] echo
00:00:05.024 Sorcerer 10.211.164.101 is alive
00:00:05.034 [Pipeline] httpRequest
00:00:05.039 HttpMethod: GET
00:00:05.039 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.040 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:05.054 Response Code: HTTP/1.1 200 OK
00:00:05.055 Success: Status code 200 is in the accepted range: 200,404
00:00:05.055 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.501 [Pipeline] sh
00:00:11.784 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.803 [Pipeline] httpRequest
00:00:11.833 [Pipeline] echo
00:00:11.835 Sorcerer 10.211.164.101 is alive
00:00:11.844 [Pipeline] httpRequest
00:00:11.848 HttpMethod: GET
00:00:11.849 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:11.849 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:11.866 Response Code: HTTP/1.1 200 OK
00:00:11.866 Success: Status code 200 is in the accepted range: 200,404
00:00:11.867 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:54.631 [Pipeline] sh
00:00:54.911 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz
00:00:57.456 [Pipeline] sh
00:00:57.738 + git -C spdk log --oneline -n5
00:00:57.738 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:57.738 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:57.738 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:57.738 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:57.738 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister
00:00:57.761 [Pipeline] withCredentials
00:00:57.772 > git --version # timeout=10
00:00:57.785 > git --version # 'git version 2.39.2'
00:00:57.801 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:57.803 [Pipeline] {
00:00:57.816 [Pipeline] retry
00:00:57.818 [Pipeline] {
00:00:57.838 [Pipeline] sh
00:00:58.118 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:00:59.063 [Pipeline] }
00:00:59.085 [Pipeline] // retry
00:00:59.090 [Pipeline] }
00:00:59.111 [Pipeline] // withCredentials
00:00:59.122 [Pipeline] httpRequest
00:00:59.139 [Pipeline] echo
00:00:59.141 Sorcerer 10.211.164.101 is alive
00:00:59.150 [Pipeline] httpRequest
00:00:59.155 HttpMethod: GET
00:00:59.155 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:59.156 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:00:59.166 Response Code: HTTP/1.1 200 OK
00:00:59.167 Success: Status code 200 is in the accepted range: 200,404
00:00:59.167 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:09.902 [Pipeline] sh
00:01:10.184 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:11.572 [Pipeline] sh
00:01:11.850 + git -C dpdk log --oneline -n5
00:01:11.850 eeb0605f11 version: 23.11.0
00:01:11.850 238778122a doc: update release notes for 23.11
00:01:11.850 46aa6b3cfc doc: fix description of RSS features
00:01:11.850 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:11.850 7e421ae345 devtools: support skipping forbid rule check
00:01:11.866 [Pipeline] writeFile
00:01:11.880 [Pipeline] sh
00:01:12.154 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:12.168 [Pipeline] sh
00:01:12.445 + cat autorun-spdk.conf
00:01:12.445 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.445 SPDK_TEST_NVMF=1
00:01:12.445 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.445 SPDK_TEST_USDT=1
00:01:12.445 SPDK_RUN_UBSAN=1
00:01:12.445 SPDK_TEST_NVMF_MDNS=1
00:01:12.445 NET_TYPE=virt
00:01:12.445 SPDK_JSONRPC_GO_CLIENT=1
00:01:12.445 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:12.445 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:12.445 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.452 RUN_NIGHTLY=1
00:01:12.455 [Pipeline] }
00:01:12.475 [Pipeline] // stage
00:01:12.488 [Pipeline] stage
00:01:12.490 [Pipeline] { (Run VM)
00:01:12.501 [Pipeline] sh
00:01:12.809 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:12.809 + echo 'Start stage prepare_nvme.sh'
00:01:12.809 Start stage prepare_nvme.sh
00:01:12.809 + [[ -n 4 ]]
00:01:12.809 + disk_prefix=ex4
00:01:12.809 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]]
00:01:12.809 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]]
00:01:12.809 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf
00:01:12.809 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.809 ++ SPDK_TEST_NVMF=1
00:01:12.809 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.809 ++ SPDK_TEST_USDT=1
00:01:12.809 ++ SPDK_RUN_UBSAN=1
00:01:12.809 ++ SPDK_TEST_NVMF_MDNS=1
00:01:12.809 ++ NET_TYPE=virt
00:01:12.809 ++ SPDK_JSONRPC_GO_CLIENT=1
00:01:12.809 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:12.809 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:01:12.809 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:12.809 ++ RUN_NIGHTLY=1
00:01:12.809 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:12.809 + nvme_files=()
00:01:12.809 + declare -A nvme_files
00:01:12.809 + backend_dir=/var/lib/libvirt/images/backends
00:01:12.809 + nvme_files['nvme.img']=5G
00:01:12.809 + nvme_files['nvme-cmb.img']=5G
00:01:12.809 + nvme_files['nvme-multi0.img']=4G
00:01:12.809 + nvme_files['nvme-multi1.img']=4G
00:01:12.809 + nvme_files['nvme-multi2.img']=4G
00:01:12.809 + nvme_files['nvme-openstack.img']=8G
00:01:12.809 + nvme_files['nvme-zns.img']=5G
00:01:12.809 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:12.809 + (( SPDK_TEST_FTL == 1 ))
00:01:12.809 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:12.809 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G
00:01:12.809 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:12.809 + for nvme in "${!nvme_files[@]}"
00:01:12.809 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G
00:01:13.066 Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:13.066 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu
00:01:13.066 + echo 'End stage prepare_nvme.sh'
00:01:13.066 End stage prepare_nvme.sh
00:01:13.078 [Pipeline] sh
00:01:13.355 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:13.355 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora38
00:01:13.355
00:01:13.355 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant
00:01:13.355 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk
00:01:13.355 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:13.355 HELP=0
00:01:13.355 DRY_RUN=0
00:01:13.355 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,
00:01:13.355 NVME_DISKS_TYPE=nvme,nvme,
00:01:13.355 NVME_AUTO_CREATE=0
00:01:13.355 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,
00:01:13.355 NVME_CMB=,,
00:01:13.355 NVME_PMR=,,
00:01:13.355 NVME_ZNS=,,
00:01:13.355 NVME_MS=,,
00:01:13.355 NVME_FDP=,,
00:01:13.355 SPDK_VAGRANT_DISTRO=fedora38
00:01:13.356 SPDK_VAGRANT_VMCPU=10
00:01:13.356 SPDK_VAGRANT_VMRAM=12288
00:01:13.356 SPDK_VAGRANT_PROVIDER=libvirt
00:01:13.356 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:13.356 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:13.356 SPDK_OPENSTACK_NETWORK=0
00:01:13.356 VAGRANT_PACKAGE_BOX=0
00:01:13.356 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:13.356 FORCE_DISTRO=true
00:01:13.356 VAGRANT_BOX_VERSION=
00:01:13.356 EXTRA_VAGRANTFILES=
00:01:13.356 NIC_MODEL=e1000
00:01:13.356
00:01:13.356 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt'
00:01:13.356 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest
00:01:16.637 Bringing machine 'default' up with 'libvirt' provider...
00:01:16.896 ==> default: Creating image (snapshot of base box volume).
00:01:17.166 ==> default: Creating domain with the following settings...
00:01:17.166 ==> default:  -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720950827_734d47a6f2afe03ecacc
00:01:17.166 ==> default:  -- Domain type: kvm
00:01:17.166 ==> default:  -- Cpus: 10
00:01:17.166 ==> default:  -- Feature: acpi
00:01:17.166 ==> default:  -- Feature: apic
00:01:17.166 ==> default:  -- Feature: pae
00:01:17.166 ==> default:  -- Memory: 12288M
00:01:17.166 ==> default:  -- Memory Backing: hugepages:
00:01:17.166 ==> default:  -- Management MAC:
00:01:17.166 ==> default:  -- Loader:
00:01:17.166 ==> default:  -- Nvram:
00:01:17.166 ==> default:  -- Base box: spdk/fedora38
00:01:17.166 ==> default:  -- Storage pool: default
00:01:17.166 ==> default:  -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720950827_734d47a6f2afe03ecacc.img (20G)
00:01:17.166 ==> default:  -- Volume Cache: default
00:01:17.166 ==> default:  -- Kernel:
00:01:17.166 ==> default:  -- Initrd:
00:01:17.166 ==> default:  -- Graphics Type: vnc
00:01:17.166 ==> default:  -- Graphics Port: -1
00:01:17.166 ==> default:  -- Graphics IP: 127.0.0.1
00:01:17.166 ==> default:  -- Graphics Password: Not defined
00:01:17.166 ==> default:  -- Video Type: cirrus
00:01:17.166 ==> default:  -- Video VRAM: 9216
00:01:17.166 ==> default:  -- Sound Type:
00:01:17.166 ==> default:  -- Keymap: en-us
00:01:17.166 ==> default:  -- TPM Path:
00:01:17.166 ==> default:  -- INPUT: type=mouse, bus=ps2
00:01:17.166 ==> default:  -- Command line args:
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme,id=nvme-0,serial=12340,
00:01:17.166 ==> default:  -> value=-drive,
00:01:17.166 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0,
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme,id=nvme-1,serial=12341,
00:01:17.166 ==> default:  -> value=-drive,
00:01:17.166 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.166 ==> default:  -> value=-drive,
00:01:17.166 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.166 ==> default:  -> value=-drive,
00:01:17.166 ==> default:  -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:17.166 ==> default:  -> value=-device,
00:01:17.166 ==> default:  -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:17.425 ==> default: Creating shared folders metadata...
00:01:17.425 ==> default: Starting domain.
00:01:19.969 ==> default: Waiting for domain to get an IP address...
00:01:41.891 ==> default: Waiting for SSH to become available...
00:01:42.462 ==> default: Configuring and enabling network interfaces...
00:01:47.738     default: SSH address: 192.168.121.156:22
00:01:47.738     default: SSH username: vagrant
00:01:47.738     default: SSH auth method: private key
00:01:49.114 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:57.223 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk
00:02:02.484 ==> default: Mounting SSHFS shared folder...
00:02:03.858 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output
00:02:03.858 ==> default: Checking Mount..
00:02:05.233 ==> default: Folder Successfully Mounted!
00:02:05.233 ==> default: Running provisioner: file...
00:02:06.168     default: ~/.gitconfig => .gitconfig
00:02:06.427
00:02:06.427 SUCCESS!
00:02:06.427
00:02:06.427 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use.
00:02:06.427 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:06.427 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm.
00:02:06.427
00:02:06.436 [Pipeline] }
00:02:06.456 [Pipeline] // stage
00:02:06.465 [Pipeline] dir
00:02:06.465 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt
00:02:06.467 [Pipeline] {
00:02:06.480 [Pipeline] catchError
00:02:06.481 [Pipeline] {
00:02:06.494 [Pipeline] sh
00:02:06.774 + vagrant ssh-config --host vagrant
00:02:06.774 + sed -ne /^Host/,$p
00:02:06.774 + tee ssh_conf
00:02:10.071 Host vagrant
00:02:10.071 HostName 192.168.121.156
00:02:10.071 User vagrant
00:02:10.071 Port 22
00:02:10.071 UserKnownHostsFile /dev/null
00:02:10.071 StrictHostKeyChecking no
00:02:10.071 PasswordAuthentication no
00:02:10.071 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38
00:02:10.071 IdentitiesOnly yes
00:02:10.071 LogLevel FATAL
00:02:10.071 ForwardAgent yes
00:02:10.071 ForwardX11 yes
00:02:10.084 [Pipeline] withEnv
00:02:10.086 [Pipeline] {
00:02:10.101 [Pipeline] sh
00:02:10.377 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:10.377 source /etc/os-release
00:02:10.377 [[ -e /image.version ]] && img=$(< /image.version)
00:02:10.377 # Minimal, systemd-like check.
00:02:10.377 if [[ -e /.dockerenv ]]; then
00:02:10.377 # Clear garbage from the node's name:
00:02:10.377 # agt-er_autotest_547-896 -> autotest_547-896
00:02:10.377 # $HOSTNAME is the actual container id
00:02:10.377 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:10.377 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:10.377 # We can assume this is a mount from a host where container is running,
00:02:10.377 # so fetch its hostname to easily identify the target swarm worker.
00:02:10.377 container="$(< /etc/hostname) ($agent)"
00:02:10.377 else
00:02:10.377 # Fallback
00:02:10.377 container=$agent
00:02:10.377 fi
00:02:10.377 fi
00:02:10.377 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:10.377
00:02:10.387 [Pipeline] }
00:02:10.408 [Pipeline] // withEnv
00:02:10.416 [Pipeline] setCustomBuildProperty
00:02:10.432 [Pipeline] stage
00:02:10.435 [Pipeline] { (Tests)
00:02:10.454 [Pipeline] sh
00:02:10.736 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:11.008 [Pipeline] sh
00:02:11.287 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:11.560 [Pipeline] timeout
00:02:11.561 Timeout set to expire in 40 min
00:02:11.563 [Pipeline] {
00:02:11.580 [Pipeline] sh
00:02:11.859 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:12.427 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:02:12.440 [Pipeline] sh
00:02:12.720 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:12.990 [Pipeline] sh
00:02:13.366 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:13.639 [Pipeline] sh
00:02:13.916 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-vg-autotest ./autoruner.sh spdk_repo
00:02:14.175 ++ readlink -f spdk_repo
00:02:14.175 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:14.175 + [[ -n /home/vagrant/spdk_repo ]]
00:02:14.175 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:14.175 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:14.175 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:14.175 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:14.175 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:14.175 + [[ nvmf-tcp-vg-autotest == pkgdep-* ]]
00:02:14.175 + cd /home/vagrant/spdk_repo
00:02:14.175 + source /etc/os-release
00:02:14.175 ++ NAME='Fedora Linux'
00:02:14.175 ++ VERSION='38 (Cloud Edition)'
00:02:14.175 ++ ID=fedora
00:02:14.175 ++ VERSION_ID=38
00:02:14.175 ++ VERSION_CODENAME=
00:02:14.175 ++ PLATFORM_ID=platform:f38
00:02:14.175 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:02:14.175 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:14.175 ++ LOGO=fedora-logo-icon
00:02:14.175 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:02:14.175 ++ HOME_URL=https://fedoraproject.org/
00:02:14.175 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:02:14.175 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:14.175 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:14.175 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:14.175 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:02:14.175 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:14.175 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:02:14.175 ++ SUPPORT_END=2024-05-14
00:02:14.175 ++ VARIANT='Cloud Edition'
00:02:14.175 ++ VARIANT_ID=cloud
00:02:14.175 + uname -a
00:02:14.175 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:02:14.175 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:14.175 Hugepages
00:02:14.175 node     hugesize     free /  total
00:02:14.175 node0   1048576kB        0 /      0
00:02:14.175 node0      2048kB        0 /      0
00:02:14.175
00:02:14.175 Type     BDF             Vendor Device NUMA    Driver           Device     Block devices
00:02:14.175 virtio   0000:00:03.0    1af4   1001   unknown virtio-pci       -          vda
00:02:14.175 NVMe     0000:00:06.0    1b36   0010   unknown nvme             nvme0      nvme0n1
00:02:14.175 NVMe     0000:00:07.0    1b36   0010   unknown nvme             nvme1      nvme1n1 nvme1n2 nvme1n3
00:02:14.175 + rm -f /tmp/spdk-ld-path
00:02:14.175 + source autorun-spdk.conf
00:02:14.175 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.175 ++ SPDK_TEST_NVMF=1
00:02:14.175 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:14.175 ++ SPDK_TEST_USDT=1
00:02:14.175 ++ SPDK_RUN_UBSAN=1
00:02:14.175 ++ SPDK_TEST_NVMF_MDNS=1
00:02:14.175 ++ NET_TYPE=virt
00:02:14.175 ++ SPDK_JSONRPC_GO_CLIENT=1
00:02:14.175 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:02:14.175 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:14.175 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.175 ++ RUN_NIGHTLY=1
00:02:14.175 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:14.175 + [[ -n '' ]]
00:02:14.175 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:14.434 + for M in /var/spdk/build-*-manifest.txt
00:02:14.434 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:14.434 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.434 + for M in /var/spdk/build-*-manifest.txt
00:02:14.434 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:14.434 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:14.434 ++ uname
00:02:14.434 + [[ Linux == \L\i\n\u\x ]]
00:02:14.434 + sudo dmesg -T
00:02:14.434 + sudo dmesg --clear
00:02:14.434 + dmesg_pid=5877
00:02:14.434 + sudo dmesg -Tw
00:02:14.434 + [[ Fedora Linux == FreeBSD ]]
00:02:14.434 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.434 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:14.434 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:14.434 + [[ -x /usr/src/fio-static/fio ]]
00:02:14.434 + export FIO_BIN=/usr/src/fio-static/fio
00:02:14.434 + FIO_BIN=/usr/src/fio-static/fio
00:02:14.434 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:14.434 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:14.434 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:14.434 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.434 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:14.434 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:14.434 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.434 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:14.434 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:14.434 Test configuration:
00:02:14.434 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:14.434 SPDK_TEST_NVMF=1
00:02:14.434 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:14.434 SPDK_TEST_USDT=1
00:02:14.434 SPDK_RUN_UBSAN=1
00:02:14.434 SPDK_TEST_NVMF_MDNS=1
00:02:14.434 NET_TYPE=virt
00:02:14.434 SPDK_JSONRPC_GO_CLIENT=1
00:02:14.434 SPDK_TEST_NATIVE_DPDK=v23.11
00:02:14.434 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build
00:02:14.434 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:14.434 RUN_NIGHTLY=1
09:54:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
09:54:44 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
09:54:44 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
09:54:44 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
09:54:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:54:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:54:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:54:44 -- paths/export.sh@5 -- $ export PATH
09:54:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
09:54:44 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
09:54:44 -- common/autobuild_common.sh@435 -- $ date +%s
09:54:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720950884.XXXXXX
09:54:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720950884.QwfmHj
09:54:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
09:54:44 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
09:54:44 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
09:54:44 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
09:54:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
09:54:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
09:54:44 -- common/autobuild_common.sh@451 -- $ get_config_params
09:54:44 -- common/autotest_common.sh@387 -- $ xtrace_disable
09:54:44 -- common/autotest_common.sh@10 -- $ set +x
09:54:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
09:54:44 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
09:54:44 -- spdk/autobuild.sh@12 -- $ umask 022
09:54:44 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
09:54:44 -- spdk/autobuild.sh@16 -- $ date -u
00:02:14.435 Sun Jul 14 09:54:44 AM UTC 2024
09:54:44 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:14.435 LTS-59-g4b94202c6
09:54:44 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
09:54:44 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
09:54:44 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
09:54:44 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
09:54:44 -- common/autotest_common.sh@1083 -- $ xtrace_disable
09:54:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.435 ************************************
00:02:14.435 START TEST ubsan
00:02:14.435 ************************************
00:02:14.435 using ubsan
09:54:44 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:02:14.435
00:02:14.435 real	0m0.000s
00:02:14.435 user	0m0.000s
00:02:14.435 sys	0m0.000s
09:54:44 -- common/autotest_common.sh@1105 -- $ xtrace_disable
09:54:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.435 ************************************
00:02:14.435 END TEST ubsan
00:02:14.435 ************************************
00:02:14.694 09:54:45 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
09:54:45 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
09:54:45 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk
09:54:45 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']'
09:54:45 -- common/autotest_common.sh@1083 -- $ xtrace_disable
09:54:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:14.694 ************************************
00:02:14.694 START TEST build_native_dpdk
00:02:14.694 ************************************
09:54:45 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk
09:54:45 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir
09:54:45 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir
09:54:45 -- common/autobuild_common.sh@50 -- $ local compiler_version
09:54:45 -- common/autobuild_common.sh@51 -- $ local compiler
09:54:45 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods
09:54:45 -- common/autobuild_common.sh@53 -- $ local repo=dpdk
09:54:45 -- common/autobuild_common.sh@55 -- $ compiler=gcc
09:54:45 -- common/autobuild_common.sh@61 -- $ export CC=gcc
09:54:45 -- common/autobuild_common.sh@61 -- $ CC=gcc
09:54:45 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]]
09:54:45 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]]
09:54:45 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion
09:54:45 -- common/autobuild_common.sh@68 -- $ compiler_version=13
09:54:45 -- common/autobuild_common.sh@69 -- $ compiler_version=13
09:54:45 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build
09:54:45 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
09:54:45 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk
09:54:45 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]]
09:54:45 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk
09:54:45 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5
00:02:14.694 eeb0605f11 version: 23.11.0
00:02:14.694 238778122a doc: update release notes for 23.11
00:02:14.694 46aa6b3cfc doc: fix description of RSS features
00:02:14.694 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:02:14.694 7e421ae345 devtools: support skipping forbid rule check
09:54:45 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
09:54:45 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
09:54:45 -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
09:54:45 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
09:54:45 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
09:54:45 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
09:54:45 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
09:54:45 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
09:54:45 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
09:54:45 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
09:54:45 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
09:54:45 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
09:54:45 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
09:54:45 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
09:54:45 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk
09:54:45 -- common/autobuild_common.sh@168 -- $ uname -s
09:54:45 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
09:54:45 -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
09:54:45 -- scripts/common.sh@372 -- $ cmp_versions 23.11.0 '<' 21.11.0
09:54:45 -- scripts/common.sh@332 -- $ local ver1 ver1_l
09:54:45 -- scripts/common.sh@333 -- $ local ver2 ver2_l
09:54:45 -- scripts/common.sh@335 -- $ IFS=.-:
09:54:45 -- scripts/common.sh@335 -- $ read -ra ver1
09:54:45 -- scripts/common.sh@336 -- $ IFS=.-:
09:54:45 -- scripts/common.sh@336 -- $ read -ra ver2
09:54:45 -- scripts/common.sh@337 -- $ local 'op=<'
09:54:45 -- scripts/common.sh@339 -- $ ver1_l=3
09:54:45 -- scripts/common.sh@340 -- $ ver2_l=3
09:54:45 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v
09:54:45 -- scripts/common.sh@343 -- $ case "$op" in
09:54:45 -- scripts/common.sh@344 -- $ : 1
09:54:45 -- scripts/common.sh@363 -- $ (( v = 0 ))
09:54:45 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
09:54:45 -- scripts/common.sh@364 -- $ decimal 23
09:54:45 -- scripts/common.sh@352 -- $ local d=23
09:54:45 -- scripts/common.sh@353 -- $ [[ 23 =~ ^[0-9]+$ ]]
09:54:45 -- scripts/common.sh@354 -- $ echo 23
09:54:45 -- scripts/common.sh@364 -- $ ver1[v]=23
09:54:45 -- scripts/common.sh@365 -- $ decimal 21
09:54:45 -- scripts/common.sh@352 -- $ local d=21
09:54:45 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]]
09:54:45 -- scripts/common.sh@354 -- $ echo 21
09:54:45 -- scripts/common.sh@365 -- $ ver2[v]=21
09:54:45 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] ))
09:54:45 -- scripts/common.sh@366 -- $ return 1
09:54:45 -- common/autobuild_common.sh@173 -- $ patch -p1
00:02:14.694 patching file config/rte_config.h
00:02:14.694 Hunk #1 succeeded at 60 (offset 1 line).
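
The lt/cmp_versions trace above is SPDK's pure-shell version comparison: each version string is split into fields on ".", "-", and ":", the fields are compared numerically left to right, and the helper returns 1 here because 23 > 21, so DPDK 23.11.0 is not older than 21.11.0 and the rte_config.h patch path for newer DPDK is taken. A condensed sketch of that logic follows; it mirrors the names in the trace but is a hedged illustration, not the verbatim scripts/common.sh source:

# Sketch of the "lt" (strictly-less-than) version check traced above.
# Illustrative only; the real helper lives in spdk/scripts/common.sh.
lt() {
    local -a ver1 ver2
    local v max
    IFS='.-:' read -ra ver1 <<< "$1"   # split 23.11.0 -> (23 11 0)
    IFS='.-:' read -ra ver2 <<< "$2"   # split 21.11.0 -> (21 11 0)
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # field greater: not less-than
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # field smaller: less-than
    done
    return 1   # all fields equal: not strictly less
}

lt 23.11.0 21.11.0 || echo "23.11.0 is not older than 21.11.0"
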
00:02:14.694 09:54:45 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
09:54:45 -- common/autobuild_common.sh@178 -- $ uname -s
09:54:45 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
09:54:45 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
09:54:45 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:19.986 The Meson build system
00:02:19.986 Version: 1.3.1
00:02:19.986 Source dir: /home/vagrant/spdk_repo/dpdk
00:02:19.986 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp
00:02:19.986 Build type: native build
00:02:19.986 Program cat found: YES (/usr/bin/cat)
00:02:19.986 Project name: DPDK
00:02:19.986 Project version: 23.11.0
00:02:19.986 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:02:19.986 C linker for the host machine: gcc ld.bfd 2.39-16
00:02:19.986 Host machine cpu family: x86_64
00:02:19.986 Host machine cpu: x86_64
00:02:19.986 Message: ## Building in Developer Mode ##
00:02:19.986 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:19.986 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh)
00:02:19.986 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh)
00:02:19.986 Program python3 found: YES (/usr/bin/python3)
00:02:19.986 Program cat found: YES (/usr/bin/cat)
00:02:19.986 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:02:19.986 Compiler for C supports arguments -march=native: YES
00:02:19.986 Checking for size of "void *" : 8
00:02:19.986 Checking for size of "void *" : 8 (cached)
00:02:19.986 Library m found: YES
00:02:19.986 Library numa found: YES
00:02:19.986 Has header "numaif.h" : YES
00:02:19.986 Library fdt found: NO
00:02:19.986 Library execinfo found: NO
00:02:19.986 Has header "execinfo.h" : YES
00:02:19.986 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:02:19.986 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:19.986 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:19.986 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:19.986 Run-time dependency openssl found: YES 3.0.9
00:02:19.986 Run-time dependency libpcap found: YES 1.10.4
00:02:19.986 Has header "pcap.h" with dependency libpcap: YES
00:02:19.986 Compiler for C supports arguments -Wcast-qual: YES
00:02:19.986 Compiler for C supports arguments -Wdeprecated: YES
00:02:19.986 Compiler for C supports arguments -Wformat: YES
00:02:19.986 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:19.986 Compiler for C supports arguments -Wformat-security: NO
00:02:19.986 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:19.986 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:19.986 Compiler for C supports arguments -Wnested-externs: YES
00:02:19.986 Compiler for C supports arguments -Wold-style-definition: YES
00:02:19.986 Compiler for C supports arguments -Wpointer-arith: YES
00:02:19.986 Compiler for C supports arguments -Wsign-compare: YES
00:02:19.986 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:19.986 Compiler for C supports arguments -Wundef: YES
00:02:19.986 Compiler for C supports arguments -Wwrite-strings: YES
00:02:19.986 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:19.986 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:19.986 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:19.986 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:19.986 Program objdump found: YES (/usr/bin/objdump)
00:02:19.986 Compiler for C supports arguments -mavx512f: YES
00:02:19.986 Checking if "AVX512 checking" compiles: YES
00:02:19.986 Fetching value of define "__SSE4_2__" : 1
00:02:19.986 Fetching value of define "__AES__" : 1
00:02:19.986 Fetching value of define "__AVX__" : 1
00:02:19.986 Fetching value of define "__AVX2__" : 1
00:02:19.986 Fetching value of define "__AVX512BW__" : (undefined)
00:02:19.986 Fetching value of define "__AVX512CD__" : (undefined)
00:02:19.986 Fetching value of define "__AVX512DQ__" : (undefined)
00:02:19.986 Fetching value of define "__AVX512F__" : (undefined)
00:02:19.986 Fetching value of define "__AVX512VL__" : (undefined)
00:02:19.986 Fetching value of define "__PCLMUL__" : 1
00:02:19.986 Fetching value of define "__RDRND__" : 1
00:02:19.986 Fetching value of define "__RDSEED__" : 1
00:02:19.986 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:19.986 Fetching value of define "__znver1__" : (undefined)
00:02:19.986 Fetching value of define "__znver2__" : (undefined)
00:02:19.986 Fetching value of define "__znver3__" : (undefined)
00:02:19.986 Fetching value of define "__znver4__" : (undefined)
00:02:19.986 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:19.986 Message: lib/log: Defining dependency "log"
00:02:19.986 Message: lib/kvargs: Defining dependency "kvargs"
00:02:19.986 Message: lib/telemetry: Defining dependency "telemetry"
00:02:19.986 Checking for function "getentropy" : NO
00:02:19.986 Message: lib/eal: Defining dependency "eal"
00:02:19.986 Message: lib/ring: Defining dependency "ring"
00:02:19.986 Message: lib/rcu: Defining dependency "rcu"
00:02:19.986 Message: lib/mempool: Defining dependency "mempool"
00:02:19.986 Message: lib/mbuf: Defining dependency "mbuf"
00:02:19.986 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:19.986 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:19.986 Compiler for C supports arguments -mpclmul: YES
00:02:19.986 Compiler for C supports arguments -maes: YES
00:02:19.986 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:19.986 Compiler for C supports arguments -mavx512bw: YES
00:02:19.986 Compiler for C supports arguments -mavx512dq: YES
00:02:19.986 Compiler for C supports arguments -mavx512vl: YES
00:02:19.986 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:19.986 Compiler for C supports arguments -mavx2: YES
00:02:19.986 Compiler for C supports arguments -mavx: YES
00:02:19.986 Message: lib/net: Defining dependency "net"
00:02:19.986 Message: lib/meter: Defining dependency "meter"
00:02:19.986 Message: lib/ethdev: Defining dependency "ethdev"
00:02:19.986 Message: lib/pci: Defining dependency "pci"
00:02:19.986 Message: lib/cmdline: Defining dependency "cmdline"
00:02:19.986 Message: lib/metrics: Defining dependency "metrics"
00:02:19.986 Message: lib/hash: Defining dependency "hash"
00:02:19.986 Message: lib/timer: Defining dependency "timer"
00:02:19.986 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512VL__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512CD__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512BW__" : (undefined) (cached)
00:02:19.986 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES
00:02:19.986 Message: lib/acl: Defining dependency "acl"
00:02:19.986 Message: lib/bbdev: Defining dependency "bbdev"
00:02:19.986 Message: lib/bitratestats: Defining dependency "bitratestats"
00:02:19.986 Run-time dependency libelf found: YES 0.190
00:02:19.986 Message: lib/bpf: Defining dependency "bpf"
00:02:19.986 Message: lib/cfgfile: Defining dependency "cfgfile"
00:02:19.986 Message: lib/compressdev: Defining dependency "compressdev"
00:02:19.986 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:19.986 Message: lib/distributor: Defining dependency "distributor"
00:02:19.986 Message: lib/dmadev: Defining dependency "dmadev"
00:02:19.986 Message: lib/efd: Defining dependency "efd"
00:02:19.986 Message: lib/eventdev: Defining dependency "eventdev"
00:02:19.986 Message: lib/dispatcher: Defining dependency "dispatcher"
00:02:19.986 Message: lib/gpudev: Defining dependency "gpudev"
00:02:19.986 Message: lib/gro: Defining dependency "gro"
00:02:19.986 Message: lib/gso: Defining dependency "gso"
00:02:19.986 Message: lib/ip_frag: Defining dependency "ip_frag"
00:02:19.986 Message: lib/jobstats: Defining dependency "jobstats"
00:02:19.986 Message: lib/latencystats: Defining dependency "latencystats"
00:02:19.986 Message: lib/lpm: Defining dependency "lpm"
00:02:19.986 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512IFMA__" : (undefined)
00:02:19.986 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES
00:02:19.986 Message: lib/member: Defining dependency "member"
00:02:19.986 Message: lib/pcapng: Defining dependency "pcapng"
00:02:19.986 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:19.986 Message: lib/power: Defining dependency "power"
00:02:19.986 Message: lib/rawdev: Defining dependency "rawdev"
00:02:19.986 Message: lib/regexdev: Defining dependency "regexdev"
00:02:19.986 Message: lib/mldev: Defining dependency "mldev"
00:02:19.986 Message: lib/rib: Defining dependency "rib"
00:02:19.986 Message: lib/reorder: Defining dependency "reorder"
00:02:19.986 Message: lib/sched: Defining dependency "sched"
00:02:19.986 Message: lib/security: Defining dependency "security"
00:02:19.986 Message: lib/stack: Defining dependency "stack"
00:02:19.986 Has header "linux/userfaultfd.h" : YES
00:02:19.986 Has header "linux/vduse.h" : YES
00:02:19.986 Message: lib/vhost: Defining dependency "vhost"
00:02:19.986 Message: lib/ipsec: Defining dependency "ipsec"
00:02:19.986 Message: lib/pdcp: Defining dependency "pdcp"
00:02:19.986 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:19.986 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:02:19.986 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:02:19.986 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:19.986 Message: lib/fib: Defining dependency "fib"
00:02:19.986 Message: lib/port: Defining dependency "port"
00:02:19.987 Message: lib/pdump: Defining dependency "pdump"
00:02:19.987 Message: lib/table: Defining dependency "table"
00:02:19.987 Message: lib/pipeline: Defining dependency "pipeline"
00:02:19.987 Message: lib/graph: Defining dependency "graph"
00:02:19.987 Message: lib/node: Defining dependency "node"
00:02:21.363 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:21.363 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:21.363 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:21.363 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:21.363 Compiler for C supports arguments -Wno-sign-compare: YES
00:02:21.363 Compiler for C supports arguments -Wno-unused-value: YES
00:02:21.363 Compiler for C supports arguments -Wno-format: YES
00:02:21.363 Compiler for C supports arguments -Wno-format-security: YES
00:02:21.363 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:02:21.363 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:21.363 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:02:21.363 Compiler for C supports arguments -Wno-unused-parameter: YES
00:02:21.363 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:02:21.363 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:21.363 Compiler for C supports arguments -mavx512bw: YES (cached)
00:02:21.363 Compiler for C supports arguments -march=skylake-avx512: YES
00:02:21.363 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:02:21.363 Has header "sys/epoll.h" : YES
00:02:21.363 Program doxygen found: YES (/usr/bin/doxygen)
00:02:21.363 Configuring doxy-api-html.conf using configuration
00:02:21.363 Configuring doxy-api-man.conf using configuration
00:02:21.363 Program mandb found: YES (/usr/bin/mandb)
00:02:21.363 Program sphinx-build found: NO
00:02:21.363 Configuring rte_build_config.h using configuration
00:02:21.363 Message:
00:02:21.363 =================
00:02:21.363 Applications Enabled
00:02:21.363 =================
00:02:21.363
00:02:21.363 apps:
00:02:21.363 	dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:02:21.363 	test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:02:21.363 	test-pmd, test-regex, test-sad, test-security-perf,
00:02:21.363
00:02:21.363 Message:
00:02:21.363 =================
00:02:21.363 Libraries Enabled
00:02:21.363 =================
00:02:21.363
00:02:21.363 libs:
00:02:21.363 	log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:21.363 	net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:02:21.363 	acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:02:21.363 	dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:02:21.363 	jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:02:21.363 	mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:02:21.363 	pdcp, fib, port, pdump, table, pipeline, graph, node,
00:02:21.363
00:02:21.363
00:02:21.363 Message:
00:02:21.363 ===============
00:02:21.363 Drivers Enabled
00:02:21.363 ===============
00:02:21.363
00:02:21.363 common:
00:02:21.363
00:02:21.363 bus:
00:02:21.363 	pci, vdev,
00:02:21.363 mempool:
00:02:21.363 	ring,
00:02:21.363 dma:
00:02:21.363
00:02:21.363 net:
00:02:21.363 	i40e,
00:02:21.363 raw:
00:02:21.363
00:02:21.363 crypto:
00:02:21.363
00:02:21.363 compress:
00:02:21.363
00:02:21.363 regex:
00:02:21.363
00:02:21.363 ml:
00:02:21.363
00:02:21.363 vdpa:
00:02:21.363
00:02:21.363 event:
00:02:21.363
00:02:21.363 baseband:
00:02:21.363
00:02:21.363 gpu:
00:02:21.363
00:02:21.363
00:02:21.363 Message:
00:02:21.363 =================
00:02:21.363 Content Skipped
00:02:21.363 =================
00:02:21.363
00:02:21.363 apps:
00:02:21.363
00:02:21.363 libs:
00:02:21.363
00:02:21.363 drivers:
00:02:21.363 	common/cpt: not in enabled drivers build config
00:02:21.363 	common/dpaax: not in enabled drivers build config
00:02:21.363 	common/iavf: not in enabled drivers build config
00:02:21.363 	common/idpf: not in enabled drivers build config
00:02:21.363 	common/mvep: not in enabled drivers build config
00:02:21.363 	common/octeontx: not in enabled drivers build config
00:02:21.364 	bus/auxiliary: not in enabled drivers build config
00:02:21.364 	bus/cdx: not in enabled drivers build config
00:02:21.364 	bus/dpaa: not in enabled drivers build config
00:02:21.364 	bus/fslmc: not in enabled drivers build config
00:02:21.364 	bus/ifpga: not in enabled drivers build config
00:02:21.364 	bus/platform: not in enabled drivers build config
00:02:21.364 	bus/vmbus: not in enabled drivers build config
00:02:21.364 	common/cnxk: not in enabled drivers build config
00:02:21.364 	common/mlx5: not in enabled drivers build config
00:02:21.364 	common/nfp: not in enabled drivers build config
00:02:21.364 	common/qat: not in enabled drivers build config
00:02:21.364 	common/sfc_efx: not in enabled drivers build config
00:02:21.364 	mempool/bucket: not in enabled drivers build config
00:02:21.364 	mempool/cnxk: not in enabled drivers build config
00:02:21.364 	mempool/dpaa: not in enabled drivers build config
00:02:21.364 	mempool/dpaa2: not in enabled drivers build config
00:02:21.364 	mempool/octeontx: not in enabled drivers build config
00:02:21.364 	mempool/stack: not in enabled drivers build config
00:02:21.364 	dma/cnxk: not in enabled drivers build config
00:02:21.364 	dma/dpaa: not in enabled drivers build config
00:02:21.364 	dma/dpaa2: not in enabled drivers build config
00:02:21.364 	dma/hisilicon: not in enabled drivers build config
00:02:21.364 	dma/idxd: not in enabled drivers build config
00:02:21.364 	dma/ioat: not in enabled drivers build config
00:02:21.364 	dma/skeleton: not in enabled drivers build config
00:02:21.364 	net/af_packet: not in enabled drivers build config
00:02:21.364 	net/af_xdp: not in enabled drivers build config
00:02:21.364 	net/ark: not in enabled drivers build config
00:02:21.364 	net/atlantic: not in enabled drivers build config
00:02:21.364 	net/avp: not in enabled drivers build config
00:02:21.364 	net/axgbe: not in enabled drivers build config
00:02:21.364 	net/bnx2x: not in enabled drivers build config
00:02:21.364 	net/bnxt: not in enabled drivers build config
00:02:21.364 	net/bonding: not in enabled drivers build config
00:02:21.364 	net/cnxk: not in enabled drivers build config
00:02:21.364 	net/cpfl: not in enabled drivers build config
00:02:21.364 	net/cxgbe: not in enabled drivers build config
00:02:21.364 	net/dpaa: not in enabled drivers build config
00:02:21.364 	net/dpaa2: not in enabled drivers build config
00:02:21.364 	net/e1000: not in enabled drivers build config
00:02:21.364 	net/ena: not in enabled drivers build config
00:02:21.364 	net/enetc: not in enabled drivers build config
00:02:21.364 	net/enetfec: not in enabled drivers build config
00:02:21.364 	net/enic: not in enabled drivers build config
00:02:21.364 	net/failsafe: not in enabled drivers build config
00:02:21.364 	net/fm10k: not in enabled drivers build config
00:02:21.364 	net/gve: not in enabled drivers build config
00:02:21.364 	net/hinic: not in enabled drivers build config
00:02:21.364 	net/hns3: not in enabled drivers build config
00:02:21.364 	net/iavf: not in enabled drivers build config
00:02:21.364 	net/ice: not in enabled drivers build config
00:02:21.364 	net/idpf: not in enabled drivers build config
00:02:21.364 	net/igc: not in enabled drivers build config
00:02:21.364 	net/ionic: not in enabled drivers build config
00:02:21.364 	net/ipn3ke: not in enabled drivers build config
00:02:21.364 	net/ixgbe: not in enabled drivers build config
00:02:21.364 	net/mana: not in enabled drivers build config
00:02:21.364 	net/memif: not in enabled drivers build config
00:02:21.364 	net/mlx4: not in enabled drivers build config
00:02:21.364 	net/mlx5: not in enabled drivers build config
00:02:21.364 	net/mvneta: not in enabled drivers build config
00:02:21.364 	net/mvpp2: not in enabled drivers build config
00:02:21.364 	net/netvsc: not in enabled drivers build config
00:02:21.364 	net/nfb: not in enabled drivers build config
00:02:21.364 	net/nfp: not in enabled drivers build config
00:02:21.364 	net/ngbe: not in enabled drivers build config
00:02:21.364 	net/null: not in enabled drivers build config
00:02:21.364 	net/octeontx: not in enabled drivers build config
00:02:21.364 	net/octeon_ep: not in enabled drivers build config
00:02:21.364 	net/pcap: not in enabled drivers build config
00:02:21.364 	net/pfe: not in enabled drivers build config
00:02:21.364 	net/qede: not in enabled drivers build config
00:02:21.364 	net/ring: not in enabled drivers build config
00:02:21.364 	net/sfc: not in enabled drivers build config
00:02:21.364 	net/softnic: not in enabled drivers build config
00:02:21.364 	net/tap: not in enabled drivers build config
00:02:21.364 	net/thunderx: not in enabled drivers build config
00:02:21.364 	net/txgbe: not in enabled drivers build config
00:02:21.364 	net/vdev_netvsc: not in enabled drivers build config
00:02:21.364 	net/vhost: not in enabled drivers build config
00:02:21.364 	net/virtio: not in enabled drivers build config
00:02:21.364 	net/vmxnet3: not in enabled drivers build config
00:02:21.364 	raw/cnxk_bphy: not in enabled drivers build config
00:02:21.364 	raw/cnxk_gpio: not in enabled drivers build config
00:02:21.364 	raw/dpaa2_cmdif: not in enabled drivers build config
00:02:21.364 	raw/ifpga: not in enabled drivers build config
00:02:21.364 	raw/ntb: not in enabled drivers build config
00:02:21.364 	raw/skeleton: not in enabled drivers build config
00:02:21.364 	crypto/armv8: not in enabled drivers build config
00:02:21.364 	crypto/bcmfs: not in enabled drivers build config
00:02:21.364 	crypto/caam_jr: not in enabled drivers build config
00:02:21.364 	crypto/ccp: not in enabled drivers build config
00:02:21.364 	crypto/cnxk: not in enabled drivers build config
00:02:21.364 	crypto/dpaa_sec: not in enabled drivers build config
00:02:21.364 	crypto/dpaa2_sec: not in enabled drivers build config
00:02:21.364 	crypto/ipsec_mb: not in enabled drivers build config
00:02:21.364 	crypto/mlx5: not in enabled drivers build config
00:02:21.364 	crypto/mvsam: not in enabled drivers build config
00:02:21.364 	crypto/nitrox: not in enabled drivers build config
00:02:21.364 	crypto/null: not in enabled drivers build config
00:02:21.364 	crypto/octeontx: not in enabled drivers build config
00:02:21.364 	crypto/openssl: not in enabled drivers build config
00:02:21.364 	crypto/scheduler: not in enabled drivers build config
00:02:21.364 	crypto/uadk: not in enabled drivers build config
00:02:21.364 	crypto/virtio: not in enabled drivers build config
00:02:21.364 	compress/isal: not in enabled drivers build config
00:02:21.364 	compress/mlx5: not in enabled drivers build config
00:02:21.364 	compress/octeontx: not in enabled drivers build config
00:02:21.364 	compress/zlib: not in enabled drivers build config
00:02:21.364 	regex/mlx5: not in enabled drivers build config
00:02:21.364 	regex/cn9k: not in enabled drivers build config
00:02:21.364 	ml/cnxk: not in enabled drivers build config
00:02:21.364 	vdpa/ifc: not in enabled drivers build config
00:02:21.364 	vdpa/mlx5: not in enabled drivers build config
00:02:21.364 	vdpa/nfp: not in enabled drivers build config
00:02:21.364 	vdpa/sfc: not in enabled drivers build config
00:02:21.364 	event/cnxk: not in enabled drivers build config
00:02:21.364 	event/dlb2: not in enabled drivers build config
00:02:21.364 	event/dpaa: not in enabled drivers build config
00:02:21.364 	event/dpaa2: not in enabled drivers build config
00:02:21.364 	event/dsw: not in enabled drivers build config
00:02:21.364 	event/opdl: not in enabled drivers build config
00:02:21.364 	event/skeleton: not in enabled drivers build config
00:02:21.364 	event/sw: not in enabled drivers build config
00:02:21.364 	event/octeontx: not in enabled drivers build config
00:02:21.364 	baseband/acc: not in enabled drivers build config
00:02:21.364 	baseband/fpga_5gnr_fec: not in enabled drivers build config
00:02:21.364 	baseband/fpga_lte_fec: not in enabled drivers build config
00:02:21.364 	baseband/la12xx: not in enabled drivers build config
00:02:21.364 	baseband/null: not in enabled drivers build config
00:02:21.364 	baseband/turbo_sw: not in enabled drivers build config
00:02:21.364 	gpu/cuda: not in enabled drivers build config
00:02:21.364
00:02:21.364
00:02:21.364 Build targets in project: 220
00:02:21.364
00:02:21.364 DPDK 23.11.0
00:02:21.364
00:02:21.364 User defined options
00:02:21.364 	libdir        : lib
00:02:21.364 	prefix        : /home/vagrant/spdk_repo/dpdk/build
00:02:21.364 	c_args        : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:21.364 c_link_args : 00:02:21.364 enable_docs : false 00:02:21.364 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:21.364 enable_kmods : false 00:02:21.364 machine : native 00:02:21.364 tests : false 00:02:21.364 00:02:21.364 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:21.364 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 00:02:21.364 09:54:51 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:21.622 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:21.622 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:21.622 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:21.622 [3/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:21.622 [4/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:21.622 [5/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:21.622 [6/710] Linking static target lib/librte_kvargs.a 00:02:21.622 [7/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:21.880 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:21.880 [9/710] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:21.880 [10/710] Linking static target lib/librte_log.a 00:02:21.880 [11/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.138 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:22.138 [13/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:22.395 [14/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:22.396 [15/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.396 [16/710] Linking target lib/librte_log.so.24.0 00:02:22.396 [17/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:22.396 [18/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:22.396 [19/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:22.653 [20/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:22.653 [21/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:22.653 [22/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:22.653 [23/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:22.653 [24/710] Linking target lib/librte_kvargs.so.24.0 00:02:22.909 [25/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:22.909 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:22.909 [27/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:22.909 [28/710] Linking static target lib/librte_telemetry.a 00:02:23.164 [29/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:23.164 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:23.164 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:23.164 [32/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.420 [33/710] Generating lib/telemetry.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:23.420 [34/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.420 [35/710] Linking target lib/librte_telemetry.so.24.0 00:02:23.420 [36/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:23.420 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.420 [38/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.420 [39/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.420 [40/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:23.420 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.420 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.420 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.678 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:23.935 [45/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:23.935 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:23.935 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.192 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.192 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.192 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.192 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.192 [52/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:24.192 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.192 [54/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.449 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.449 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.449 [57/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.449 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.705 [59/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.705 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:24.705 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.705 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.705 [63/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.705 [64/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.705 [65/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:24.962 [66/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.962 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.962 [68/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:25.219 [69/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:25.219 [70/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:25.219 [71/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:25.219 [72/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:25.219 [73/710] Compiling C 
object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:25.475 [74/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:25.475 [75/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:25.475 [76/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:25.475 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:25.731 [78/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:25.731 [79/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:25.731 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:25.987 [81/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:25.987 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:25.987 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:25.987 [84/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:25.987 [85/710] Linking static target lib/librte_ring.a 00:02:26.244 [86/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:26.244 [87/710] Linking static target lib/librte_eal.a 00:02:26.244 [88/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:26.244 [89/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.501 [90/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:26.501 [91/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:26.501 [92/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:26.501 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:26.501 [94/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:26.501 [95/710] Linking static target lib/librte_mempool.a 00:02:26.758 [96/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:26.758 [97/710] Linking static target lib/librte_rcu.a 00:02:27.016 [98/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:27.016 [99/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:27.016 [100/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:27.016 [101/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.274 [102/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.274 [103/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:27.274 [104/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:27.274 [105/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:27.274 [106/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:27.274 [107/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:27.531 [108/710] Linking static target lib/librte_mbuf.a 00:02:27.531 [109/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:27.531 [110/710] Linking static target lib/librte_net.a 00:02:27.789 [111/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:27.789 [112/710] Linking static target lib/librte_meter.a 00:02:27.789 [113/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:27.789 [114/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:28.046 [115/710] Compiling C object 
lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:28.046 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:28.046 [117/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.046 [118/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.046 [119/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:28.611 [120/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:28.611 [121/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:28.868 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:28.868 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:29.126 [124/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:29.126 [125/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:29.126 [126/710] Linking static target lib/librte_pci.a 00:02:29.126 [127/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:29.126 [128/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:29.126 [129/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:29.126 [130/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:29.383 [131/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:29.383 [132/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:29.383 [133/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:29.383 [134/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:29.383 [135/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:29.641 [136/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:29.641 [137/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:29.641 [138/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:29.641 [139/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:29.641 [140/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:29.641 [141/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:29.901 [142/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:29.901 [143/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:29.901 [144/710] Linking static target lib/librte_cmdline.a 00:02:29.901 [145/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:30.165 [146/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:30.165 [147/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:30.165 [148/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:30.165 [149/710] Linking static target lib/librte_metrics.a 00:02:30.423 [150/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:30.696 [151/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.696 [152/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.954 [153/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:30.954 [154/710] Linking static target 
lib/librte_timer.a 00:02:30.954 [155/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:31.211 [156/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.469 [157/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:31.726 [158/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:31.726 [159/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:31.983 [160/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:32.240 [161/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:32.240 [162/710] Linking static target lib/librte_bitratestats.a 00:02:32.240 [163/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:32.240 [164/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:32.496 [165/710] Linking static target lib/librte_ethdev.a 00:02:32.496 [166/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.496 [167/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:32.496 [168/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:32.765 [169/710] Linking static target lib/librte_bbdev.a 00:02:32.765 [170/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:32.765 [171/710] Linking target lib/librte_eal.so.24.0 00:02:32.765 [172/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:32.765 [173/710] Linking static target lib/librte_hash.a 00:02:32.765 [174/710] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:33.025 [175/710] Linking target lib/librte_ring.so.24.0 00:02:33.025 [176/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:33.025 [177/710] Linking target lib/librte_meter.so.24.0 00:02:33.025 [178/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:33.025 [179/710] Linking target lib/librte_rcu.so.24.0 00:02:33.281 [180/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:33.281 [181/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:33.281 [182/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:33.281 [183/710] Linking target lib/librte_mempool.so.24.0 00:02:33.281 [184/710] Linking target lib/librte_pci.so.24.0 00:02:33.281 [185/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:33.281 [186/710] Linking static target lib/acl/libavx2_tmp.a 00:02:33.281 [187/710] Linking target lib/librte_timer.so.24.0 00:02:33.281 [188/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.538 [189/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:33.538 [190/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.538 [191/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:33.538 [192/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:33.538 [193/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:33.538 [194/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:33.538 [195/710] Linking target lib/librte_mbuf.so.24.0 00:02:33.538 [196/710] Linking static target lib/acl/libavx512_tmp.a 00:02:33.538 [197/710] Generating symbol file 
lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:33.538 [198/710] Linking target lib/librte_net.so.24.0 00:02:33.795 [199/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:33.795 [200/710] Linking static target lib/librte_acl.a 00:02:33.795 [201/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:33.795 [202/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:33.795 [203/710] Linking target lib/librte_cmdline.so.24.0 00:02:33.795 [204/710] Linking target lib/librte_hash.so.24.0 00:02:33.795 [205/710] Linking target lib/librte_bbdev.so.24.0 00:02:33.795 [206/710] Linking static target lib/librte_cfgfile.a 00:02:34.053 [207/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:34.053 [208/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:34.053 [209/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.053 [210/710] Linking target lib/librte_acl.so.24.0 00:02:34.053 [211/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:34.310 [212/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.310 [213/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:34.310 [214/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:34.310 [215/710] Linking target lib/librte_cfgfile.so.24.0 00:02:34.310 [216/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:34.568 [217/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:34.825 [218/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:34.825 [219/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.083 [220/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:35.083 [221/710] Linking static target lib/librte_bpf.a 00:02:35.083 [222/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.083 [223/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.083 [224/710] Linking static target lib/librte_compressdev.a 00:02:35.083 [225/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.341 [226/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.341 [227/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:35.341 [228/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:35.341 [229/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:35.341 [230/710] Linking static target lib/librte_distributor.a 00:02:35.600 [231/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.600 [232/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.600 [233/710] Linking target lib/librte_compressdev.so.24.0 00:02:35.600 [234/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.858 [235/710] Linking target lib/librte_distributor.so.24.0 00:02:35.858 [236/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:35.858 [237/710] Linking static target lib/librte_dmadev.a 00:02:36.117 [238/710] Compiling C object 
lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:36.375 [239/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.375 [240/710] Linking target lib/librte_dmadev.so.24.0 00:02:36.375 [241/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:36.375 [242/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:36.633 [243/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:36.891 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:36.891 [245/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:36.891 [246/710] Linking static target lib/librte_efd.a 00:02:36.891 [247/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.891 [248/710] Linking static target lib/librte_cryptodev.a 00:02:36.891 [249/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:37.150 [250/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.150 [251/710] Linking target lib/librte_efd.so.24.0 00:02:37.408 [252/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.408 [253/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:37.408 [254/710] Linking target lib/librte_ethdev.so.24.0 00:02:37.665 [255/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:37.665 [256/710] Linking target lib/librte_metrics.so.24.0 00:02:37.665 [257/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:37.665 [258/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:37.665 [259/710] Linking static target lib/librte_dispatcher.a 00:02:37.665 [260/710] Linking target lib/librte_bpf.so.24.0 00:02:37.665 [261/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:37.923 [262/710] Linking target lib/librte_bitratestats.so.24.0 00:02:37.923 [263/710] Linking static target lib/librte_gpudev.a 00:02:37.923 [264/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:37.923 [265/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:37.923 [266/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:37.923 [267/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:38.180 [268/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.180 [269/710] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.180 [270/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:38.180 [271/710] Linking target lib/librte_cryptodev.so.24.0 00:02:38.480 [272/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:38.480 [273/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:38.738 [274/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.738 [275/710] Linking target lib/librte_gpudev.so.24.0 00:02:38.738 [276/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:38.738 [277/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:38.738 [278/710] Compiling C object 
lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:38.738 [279/710] Linking static target lib/librte_gro.a 00:02:38.738 [280/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:38.738 [281/710] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:38.738 [282/710] Linking static target lib/librte_eventdev.a 00:02:38.997 [283/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:38.997 [284/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:38.997 [285/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:38.997 [286/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:38.997 [287/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.997 [288/710] Linking target lib/librte_gro.so.24.0 00:02:39.256 [289/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:39.514 [290/710] Linking static target lib/librte_gso.a 00:02:39.514 [291/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:39.514 [292/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.514 [293/710] Linking target lib/librte_gso.so.24.0 00:02:39.771 [294/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:39.771 [295/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:39.771 [296/710] Linking static target lib/librte_jobstats.a 00:02:39.772 [297/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:39.772 [298/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:39.772 [299/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:40.029 [300/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:40.029 [301/710] Linking static target lib/librte_ip_frag.a 00:02:40.029 [302/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:40.029 [303/710] Linking static target lib/librte_latencystats.a 00:02:40.029 [304/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.029 [305/710] Linking target lib/librte_jobstats.so.24.0 00:02:40.029 [306/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.286 [307/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.286 [308/710] Linking target lib/librte_latencystats.so.24.0 00:02:40.286 [309/710] Linking target lib/librte_ip_frag.so.24.0 00:02:40.286 [310/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:40.286 [311/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:40.543 [312/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:40.543 [313/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:40.543 [314/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:40.543 [315/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:40.543 [316/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:40.544 [317/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:40.801 [318/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.058 [319/710] Linking 
target lib/librte_eventdev.so.24.0 00:02:41.058 [320/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:41.058 [321/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:41.058 [322/710] Linking static target lib/librte_lpm.a 00:02:41.058 [323/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:41.058 [324/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:41.058 [325/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:41.058 [326/710] Linking target lib/librte_dispatcher.so.24.0 00:02:41.058 [327/710] Linking static target lib/librte_pcapng.a 00:02:41.316 [328/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:41.316 [329/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:41.316 [330/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:41.316 [331/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:41.316 [332/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.574 [333/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.574 [334/710] Linking target lib/librte_lpm.so.24.0 00:02:41.574 [335/710] Linking target lib/librte_pcapng.so.24.0 00:02:41.574 [336/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:41.574 [337/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:41.832 [338/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:41.832 [339/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:41.832 [340/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:42.089 [341/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:42.089 [342/710] Linking static target lib/librte_power.a 00:02:42.089 [343/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:42.089 [344/710] Linking static target lib/librte_regexdev.a 00:02:42.089 [345/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:42.089 [346/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:42.090 [347/710] Linking static target lib/librte_rawdev.a 00:02:42.090 [348/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:42.090 [349/710] Linking static target lib/librte_member.a 00:02:42.348 [350/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:42.348 [351/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:42.348 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.606 [353/710] Linking target lib/librte_member.so.24.0 00:02:42.606 [354/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:42.606 [355/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:42.606 [356/710] Linking static target lib/librte_mldev.a 00:02:42.606 [357/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.606 [358/710] Linking target lib/librte_rawdev.so.24.0 00:02:42.606 [359/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:42.606 [360/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:42.866 [361/710] Linking target lib/librte_power.so.24.0 00:02:42.866 [362/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.866 [363/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:42.866 [364/710] Linking target lib/librte_regexdev.so.24.0 00:02:43.123 [365/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:43.123 [366/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:43.123 [367/710] Linking static target lib/librte_reorder.a 00:02:43.123 [368/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:43.380 [369/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:43.380 [370/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:43.380 [371/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:43.380 [372/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:43.380 [373/710] Linking static target lib/librte_rib.a 00:02:43.380 [374/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:43.380 [375/710] Linking static target lib/librte_stack.a 00:02:43.380 [376/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.637 [377/710] Linking target lib/librte_reorder.so.24.0 00:02:43.637 [378/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:43.637 [379/710] Linking static target lib/librte_security.a 00:02:43.637 [380/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.637 [381/710] Linking target lib/librte_stack.so.24.0 00:02:43.637 [382/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:43.894 [383/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.894 [384/710] Linking target lib/librte_mldev.so.24.0 00:02:43.894 [385/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.894 [386/710] Linking target lib/librte_rib.so.24.0 00:02:43.894 [387/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.894 [388/710] Linking target lib/librte_security.so.24.0 00:02:44.152 [389/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:44.152 [390/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:44.152 [391/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:44.152 [392/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:44.409 [393/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:44.409 [394/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:44.409 [395/710] Linking static target lib/librte_sched.a 00:02:44.666 [396/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:44.924 [397/710] Linking target lib/librte_sched.so.24.0 00:02:44.924 [398/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:44.924 [399/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:44.924 [400/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:44.924 [401/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:45.181 [402/710] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:45.438 [403/710] Compiling C object 
lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:45.438 [404/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:45.695 [405/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:45.696 [406/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:45.696 [407/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:45.953 [408/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:45.953 [409/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:45.953 [410/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:46.210 [411/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:46.210 [412/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:46.210 [413/710] Linking static target lib/librte_ipsec.a 00:02:46.470 [414/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:46.470 [415/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:46.470 [416/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.470 [417/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:46.470 [418/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:46.470 [419/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:46.733 [420/710] Linking target lib/librte_ipsec.so.24.0 00:02:46.733 [421/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:46.733 [422/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:46.733 [423/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:47.668 [424/710] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:47.668 [425/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:47.668 [426/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:47.668 [427/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:47.668 [428/710] Linking static target lib/librte_fib.a 00:02:47.668 [429/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:47.668 [430/710] Linking static target lib/librte_pdcp.a 00:02:47.668 [431/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:47.668 [432/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:47.926 [433/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.926 [434/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.926 [435/710] Linking target lib/librte_fib.so.24.0 00:02:47.926 [436/710] Linking target lib/librte_pdcp.so.24.0 00:02:48.184 [437/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:48.749 [438/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:48.749 [439/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:48.749 [440/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:48.749 [441/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:48.749 [442/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:49.006 [443/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:49.006 [444/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:49.006 [445/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:49.264 
[446/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:49.265 [447/710] Linking static target lib/librte_port.a 00:02:49.522 [448/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:49.522 [449/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:49.522 [450/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:49.780 [451/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:49.780 [452/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:49.780 [453/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:50.039 [454/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.039 [455/710] Linking target lib/librte_port.so.24.0 00:02:50.039 [456/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:50.039 [457/710] Linking static target lib/librte_pdump.a 00:02:50.039 [458/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:50.297 [459/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:50.297 [460/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.297 [461/710] Linking target lib/librte_pdump.so.24.0 00:02:50.555 [462/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:50.555 [463/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:50.813 [464/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:50.813 [465/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:50.813 [466/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:50.813 [467/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:51.071 [468/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:51.071 [469/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:51.329 [470/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:51.329 [471/710] Linking static target lib/librte_table.a 00:02:51.329 [472/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:51.587 [473/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:51.845 [474/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:51.845 [475/710] Linking target lib/librte_table.so.24.0 00:02:52.103 [476/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:52.103 [477/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:52.103 [478/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:52.103 [479/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:52.361 [480/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:52.618 [481/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:52.875 [482/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:52.875 [483/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:52.875 [484/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:52.876 [485/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:53.133 [486/710] Compiling C object 
lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:53.391 [487/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:53.391 [488/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:53.648 [489/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:53.648 [490/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:53.649 [491/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:53.649 [492/710] Linking static target lib/librte_graph.a 00:02:53.649 [493/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:54.214 [494/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:54.214 [495/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.214 [496/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:54.214 [497/710] Linking target lib/librte_graph.so.24.0 00:02:54.472 [498/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:54.472 [499/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:54.730 [500/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:54.989 [501/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:54.989 [502/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:54.989 [503/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:54.989 [504/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:54.989 [505/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:55.247 [506/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:55.247 [507/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:55.506 [508/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:55.506 [509/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:55.765 [510/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:55.765 [511/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:55.765 [512/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:55.765 [513/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:56.023 [514/710] Linking static target lib/librte_node.a 00:02:56.024 [515/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:56.281 [516/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.281 [517/710] Linking target lib/librte_node.so.24.0 00:02:56.281 [518/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:56.281 [519/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:56.281 [520/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:56.281 [521/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:56.539 [522/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:56.539 [523/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.539 [524/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:56.539 [525/710] Linking static target drivers/librte_bus_pci.a 00:02:56.539 [526/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:56.539 [527/710] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.539 [528/710] Linking static target drivers/librte_bus_vdev.a 00:02:56.797 [529/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:56.797 [530/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:56.797 [531/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.797 [532/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:56.797 [533/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:56.797 [534/710] Linking target drivers/librte_bus_vdev.so.24.0 00:02:57.055 [535/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:57.055 [536/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:57.055 [537/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:57.055 [538/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.055 [539/710] Linking target drivers/librte_bus_pci.so.24.0 00:02:57.055 [540/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:57.314 [541/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.314 [542/710] Linking static target drivers/librte_mempool_ring.a 00:02:57.314 [543/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:57.314 [544/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:57.314 [545/710] Linking target drivers/librte_mempool_ring.so.24.0 00:02:57.572 [546/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:57.829 [547/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:58.087 [548/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:58.087 [549/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:58.087 [550/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:58.087 [551/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:59.022 [552/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:59.022 [553/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:59.280 [554/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:59.280 [555/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:59.280 [556/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:59.280 [557/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:59.538 [558/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:59.796 [559/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:00.054 [560/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:00.054 [561/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:00.054 [562/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:03:00.621 [563/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:03:00.621 [564/710] 
Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:03:00.621 [565/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:00.879 [566/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:03:01.137 [567/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:01.396 [568/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:03:01.396 [569/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:01.396 [570/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:03:01.396 [571/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:03:01.396 [572/710] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:03:01.396 [573/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:01.962 [574/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:03:01.962 [575/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:03:02.220 [576/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:03:02.220 [577/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:03:02.220 [578/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:02.220 [579/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:02.220 [580/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:03:02.478 [581/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:02.478 [582/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.736 [583/710] Linking static target drivers/librte_net_i40e.a 00:03:02.736 [584/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:02.736 [585/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:02.736 [586/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:02.994 [587/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:02.994 [588/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:02.994 [589/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:02.994 [590/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:02.994 [591/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:02.994 [592/710] Linking static target lib/librte_vhost.a 00:03:03.252 [593/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.510 [594/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:03.510 [595/710] Linking target drivers/librte_net_i40e.so.24.0 00:03:03.510 [596/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:03.768 [597/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:04.026 [598/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:04.285 [599/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:04.285 [600/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:04.285 [601/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.285 [602/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:04.285 [603/710] Linking target lib/librte_vhost.so.24.0 00:03:04.543 
[604/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:04.543 [605/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:04.543 [606/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:04.543 [607/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:05.110 [608/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:05.110 [609/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:05.370 [610/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:05.370 [611/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:03:05.370 [612/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:05.370 [613/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:05.370 [614/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:05.628 [615/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:05.628 [616/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:05.628 [617/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:05.887 [618/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:06.145 [619/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:06.145 [620/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:03:06.404 [621/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:06.404 [622/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:06.404 [623/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:07.340 [624/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:07.597 [625/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:07.597 [626/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:07.597 [627/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:07.597 [628/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:07.597 [629/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:07.597 [630/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:07.854 [631/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:03:07.854 [632/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:07.854 [633/710] Linking static target lib/librte_pipeline.a 00:03:07.854 [634/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:03:07.854 [635/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:08.113 [636/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:08.113 [637/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:03:08.371 [638/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:08.371 [639/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 
00:03:08.371 [640/710] Linking target app/dpdk-dumpcap 00:03:08.371 [641/710] Linking target app/dpdk-graph 00:03:08.936 [642/710] Linking target app/dpdk-pdump 00:03:08.936 [643/710] Linking target app/dpdk-test-acl 00:03:08.936 [644/710] Linking target app/dpdk-proc-info 00:03:08.936 [645/710] Linking target app/dpdk-test-cmdline 00:03:08.936 [646/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:08.936 [647/710] Linking target app/dpdk-test-crypto-perf 00:03:08.936 [648/710] Linking target app/dpdk-test-compress-perf 00:03:09.195 [649/710] Linking target app/dpdk-test-dma-perf 00:03:09.195 [650/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:09.454 [651/710] Linking target app/dpdk-test-gpudev 00:03:09.454 [652/710] Linking target app/dpdk-test-fib 00:03:09.454 [653/710] Linking target app/dpdk-test-flow-perf 00:03:09.713 [654/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:03:09.713 [655/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:03:09.713 [656/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:03:09.713 [657/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:09.971 [658/710] Linking target app/dpdk-test-eventdev 00:03:09.971 [659/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:03:09.971 [660/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:03:10.230 [661/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:03:10.230 [662/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:03:10.230 [663/710] Linking target app/dpdk-test-bbdev 00:03:10.230 [664/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:10.489 [665/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:10.489 [666/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:10.747 [667/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:10.747 [668/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.747 [669/710] Linking target lib/librte_pipeline.so.24.0 00:03:11.005 [670/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:11.005 [671/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:11.005 [672/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:11.005 [673/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:11.262 [674/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:11.262 [675/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:11.519 [676/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:03:11.520 [677/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:03:11.776 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:11.776 [679/710] Linking target app/dpdk-test-pipeline 00:03:12.033 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:12.033 [681/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:12.033 [682/710] Linking target app/dpdk-test-mldev 00:03:12.290 [683/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 
00:03:12.548 [684/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:12.548 [685/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:12.805 [686/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:12.805 [687/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:12.805 [688/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:13.062 [689/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:13.320 [690/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:13.320 [691/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:03:13.320 [692/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:13.320 [693/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:13.885 [694/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:14.143 [695/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:14.143 [696/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:14.143 [697/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:14.400 [698/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:14.658 [699/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:14.658 [700/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:14.658 [701/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:14.658 [702/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:14.916 [703/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:14.916 [704/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:14.916 [705/710] Linking target app/dpdk-test-regex 00:03:15.173 [706/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:15.173 [707/710] Linking target app/dpdk-test-sad 00:03:15.432 [708/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:15.691 [709/710] Linking target app/dpdk-testpmd 00:03:15.952 [710/710] Linking target app/dpdk-test-security-perf 00:03:15.952 09:55:46 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:15.952 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:16.210 [0/1] Installing files. 
00:03:16.472 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.472 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:16.473 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.473 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-macsec/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:16.474 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ipsec_sa.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/rss.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_node/node.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/efd_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:16.475 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/commands.list to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.476 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:16.476 Installing lib/librte_log.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing 
lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.476 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_dispatcher.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
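Each librte_* component above lands in two forms: a static archive (.a) and a shared object carrying the 24.0 ABI version in its file name. The unversioned .so and .so.24 link names are added as symlinks near the end of this install step (the "Installing symlink pointing to ..." entries below). A minimal sketch of how the resulting soname chain could be inspected, assuming a shell on the build VM; the choice of librte_eal here is illustrative, not taken from this run:

    # dev-time link name -> soname -> real DSO (illustrative listing)
    ls -l /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so*
    # confirm the soname embedded in the versioned object
    readelf -d /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24.0 | grep SONAME
    # let the dynamic loader find the freshly installed tree at run time
    export LD_LIBRARY_PATH=/home/vagrant/spdk_repo/dpdk/build/lib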
00:03:16.477 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_mldev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pdcp.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 
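With the libraries in place, the libdpdk.pc and libdpdk-libs.pc files installed later in this log (under /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig) are what a downstream build such as the SPDK build in this job consumes. A sketch of compiling a program against this tree via pkg-config, assuming hello.c as a stand-in source file; the --static variant pulls in the .a archives listed above instead of the shared objects:

    # point pkg-config at the install tree from this log
    export PKG_CONFIG_PATH=/home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig
    # dynamic link against the versioned shared objects
    cc hello.c -o hello $(pkg-config --cflags --libs libdpdk)
    # static link against the .a archives instead
    cc hello.c -o hello_static $(pkg-config --cflags --libs --static libdpdk)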
00:03:16.477 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.477 Installing lib/librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing lib/librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing drivers/librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.749 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing drivers/librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.749 Installing drivers/librte_mempool_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing drivers/librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.749 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:16.749 Installing drivers/librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0 00:03:16.749 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-graph to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-dma-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-mldev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing 
app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/log/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.749 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lock_annotations.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_stdatomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing 
/home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_dtls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:16.750 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_pdcp_hdr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.010 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_dma_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/dispatcher/rte_dispatcher.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/mldev/rte_mldev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.011 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 
Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pdcp/rte_pdcp_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing 
/home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_model_rtc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip6_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_udp4_input_api.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/buildtools/dpdk-cmdline-gen.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-rss-flows.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.012 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:17.012 Installing symlink pointing to librte_log.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so.24 00:03:17.012 Installing symlink pointing to librte_log.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_log.so 00:03:17.012 Installing symlink pointing to librte_kvargs.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.24 00:03:17.012 Installing symlink pointing to librte_kvargs.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:17.012 Installing symlink pointing to librte_telemetry.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.24 00:03:17.012 Installing symlink pointing to librte_telemetry.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:17.012 Installing symlink pointing to librte_eal.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.24 00:03:17.012 Installing symlink pointing to librte_eal.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:17.012 Installing symlink pointing to librte_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.24 00:03:17.012 Installing symlink pointing to librte_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:17.012 Installing symlink pointing to librte_rcu.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.24 00:03:17.012 Installing symlink pointing to librte_rcu.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:17.012 Installing symlink pointing to librte_mempool.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.24 00:03:17.012 Installing symlink pointing to librte_mempool.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:17.012 Installing symlink pointing to librte_mbuf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.24 00:03:17.012 Installing symlink pointing to librte_mbuf.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:17.012 Installing symlink pointing to librte_net.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.24 00:03:17.012 Installing symlink pointing to librte_net.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:17.012 Installing symlink pointing to librte_meter.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.24 00:03:17.012 Installing symlink pointing to librte_meter.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:17.012 Installing symlink pointing to librte_ethdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.24 00:03:17.012 Installing symlink pointing to librte_ethdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:17.012 Installing symlink pointing to librte_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.24 00:03:17.012 Installing symlink pointing to librte_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:17.012 Installing symlink pointing to librte_cmdline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.24 00:03:17.012 Installing symlink pointing to librte_cmdline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:17.012 Installing symlink pointing to librte_metrics.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.24 00:03:17.012 Installing symlink pointing to librte_metrics.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:17.012 Installing symlink pointing to librte_hash.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.24 00:03:17.012 Installing symlink pointing to librte_hash.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:17.012 Installing symlink pointing to librte_timer.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.24 00:03:17.012 Installing symlink pointing to librte_timer.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:17.012 Installing symlink pointing to librte_acl.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.24 00:03:17.012 Installing symlink pointing to librte_acl.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:17.012 Installing symlink pointing to librte_bbdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.24 00:03:17.012 Installing symlink pointing to librte_bbdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:17.012 Installing symlink pointing to librte_bitratestats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.24 00:03:17.012 Installing symlink pointing to librte_bitratestats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:17.012 Installing symlink pointing to librte_bpf.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.24 00:03:17.013 Installing symlink pointing to librte_bpf.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:17.013 Installing symlink pointing to librte_cfgfile.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.24 00:03:17.013 Installing symlink pointing to librte_cfgfile.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:17.013 Installing symlink pointing to librte_compressdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.24 00:03:17.013 Installing symlink pointing to librte_compressdev.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:17.013 Installing symlink pointing to librte_cryptodev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.24 00:03:17.013 Installing symlink pointing to librte_cryptodev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:17.013 Installing symlink pointing to librte_distributor.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.24 00:03:17.013 Installing symlink pointing to librte_distributor.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:17.013 Installing symlink pointing to librte_dmadev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.24 00:03:17.013 Installing symlink pointing to librte_dmadev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:17.013 Installing symlink pointing to librte_efd.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.24 00:03:17.013 Installing symlink pointing to librte_efd.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:17.013 Installing symlink pointing to librte_eventdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.24 00:03:17.013 Installing symlink pointing to librte_eventdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:17.013 Installing symlink pointing to librte_dispatcher.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so.24 00:03:17.013 Installing symlink pointing to librte_dispatcher.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dispatcher.so 00:03:17.013 Installing symlink pointing to librte_gpudev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.24 00:03:17.013 Installing symlink pointing to librte_gpudev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:17.013 Installing symlink pointing to librte_gro.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.24 00:03:17.013 Installing symlink pointing to librte_gro.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:17.013 Installing symlink pointing to librte_gso.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.24 00:03:17.013 Installing symlink pointing to librte_gso.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:17.013 Installing symlink pointing to librte_ip_frag.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.24 00:03:17.013 Installing symlink pointing to librte_ip_frag.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:17.013 Installing symlink pointing to librte_jobstats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.24 00:03:17.013 Installing symlink pointing to librte_jobstats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:17.013 Installing symlink pointing to librte_latencystats.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.24 00:03:17.013 Installing symlink pointing to librte_latencystats.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:17.013 Installing symlink pointing to librte_lpm.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.24 00:03:17.013 Installing symlink pointing to librte_lpm.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:17.013 Installing symlink pointing to librte_member.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.24 00:03:17.013 Installing symlink pointing to 
librte_member.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:17.013 Installing symlink pointing to librte_pcapng.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.24 00:03:17.013 Installing symlink pointing to librte_pcapng.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:17.013 Installing symlink pointing to librte_power.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.24 00:03:17.013 Installing symlink pointing to librte_power.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:17.013 Installing symlink pointing to librte_rawdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.24 00:03:17.013 Installing symlink pointing to librte_rawdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:17.013 Installing symlink pointing to librte_regexdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.24 00:03:17.013 Installing symlink pointing to librte_regexdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:17.013 Installing symlink pointing to librte_mldev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so.24 00:03:17.013 Installing symlink pointing to librte_mldev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mldev.so 00:03:17.013 Installing symlink pointing to librte_rib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.24 00:03:17.013 Installing symlink pointing to librte_rib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:17.013 Installing symlink pointing to librte_reorder.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.24 00:03:17.013 Installing symlink pointing to librte_reorder.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:17.013 Installing symlink pointing to librte_sched.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.24 00:03:17.013 Installing symlink pointing to librte_sched.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:17.013 Installing symlink pointing to librte_security.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.24 00:03:17.013 Installing symlink pointing to librte_security.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:17.013 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:03:17.013 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:03:17.013 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:03:17.013 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:03:17.013 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:03:17.013 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:03:17.013 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:03:17.013 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:03:17.013 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:03:17.013 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:03:17.013 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:03:17.013 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:03:17.013 Installing symlink pointing to librte_stack.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.24 00:03:17.013 Installing symlink pointing to librte_stack.so.24 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:17.013 Installing symlink pointing to librte_vhost.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.24 00:03:17.013 Installing symlink pointing to librte_vhost.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:17.013 Installing symlink pointing to librte_ipsec.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.24 00:03:17.013 Installing symlink pointing to librte_ipsec.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:17.013 Installing symlink pointing to librte_pdcp.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so.24 00:03:17.013 Installing symlink pointing to librte_pdcp.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdcp.so 00:03:17.013 Installing symlink pointing to librte_fib.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.24 00:03:17.013 Installing symlink pointing to librte_fib.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:17.013 Installing symlink pointing to librte_port.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.24 00:03:17.013 Installing symlink pointing to librte_port.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:17.013 Installing symlink pointing to librte_pdump.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.24 00:03:17.013 Installing symlink pointing to librte_pdump.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:17.013 Installing symlink pointing to librte_table.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.24 00:03:17.013 Installing symlink pointing to librte_table.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:17.013 Installing symlink pointing to librte_pipeline.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.24 00:03:17.013 Installing symlink pointing to librte_pipeline.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:17.013 Installing symlink pointing to librte_graph.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.24 00:03:17.013 Installing symlink pointing to librte_graph.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:17.013 Installing symlink pointing to librte_node.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.24 00:03:17.013 Installing symlink pointing to librte_node.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:17.013 Installing symlink pointing to librte_bus_pci.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:03:17.013 Installing symlink pointing to librte_bus_pci.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:03:17.013 Installing symlink pointing to librte_bus_vdev.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:03:17.013 Installing symlink pointing to librte_bus_vdev.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:03:17.013 Installing symlink pointing to librte_mempool_ring.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:03:17.013 Installing symlink pointing to librte_mempool_ring.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:03:17.013 Installing symlink pointing to librte_net_i40e.so.24.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 
00:03:17.013 Installing symlink pointing to librte_net_i40e.so.24 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:03:17.013 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:03:17.013 09:55:47 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:17.013 09:55:47 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:17.013 09:55:47 -- common/autobuild_common.sh@200 -- $ cat 00:03:17.013 09:55:47 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:17.013 00:03:17.013 real 1m2.389s 00:03:17.013 user 7m36.828s 00:03:17.013 sys 1m14.241s 00:03:17.013 09:55:47 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:17.013 09:55:47 -- common/autotest_common.sh@10 -- $ set +x 00:03:17.013 ************************************ 00:03:17.013 END TEST build_native_dpdk 00:03:17.013 ************************************ 00:03:17.013 09:55:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:17.013 09:55:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:17.013 09:55:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang --with-shared 00:03:17.271 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:17.271 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:17.271 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:17.271 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:17.836 Using 'verbs' RDMA provider 00:03:33.308 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:45.512 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:45.512 go version go1.21.1 linux/amd64 00:03:45.512 Creating mk/config.mk...done. 00:03:45.512 Creating mk/cc.flags.mk...done. 00:03:45.512 Type 'make' to build. 00:03:45.512 09:56:15 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:45.512 09:56:15 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:45.512 09:56:15 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:45.512 09:56:15 -- common/autotest_common.sh@10 -- $ set +x 00:03:45.512 ************************************ 00:03:45.512 START TEST make 00:03:45.512 ************************************ 00:03:45.512 09:56:15 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:45.771 make[1]: Nothing to be done for 'all'. 
00:04:12.306 CC lib/ut_mock/mock.o 00:04:12.306 CC lib/log/log_flags.o 00:04:12.306 CC lib/log/log.o 00:04:12.306 CC lib/log/log_deprecated.o 00:04:12.306 CC lib/ut/ut.o 00:04:12.307 LIB libspdk_ut_mock.a 00:04:12.307 SO libspdk_ut_mock.so.5.0 00:04:12.307 LIB libspdk_log.a 00:04:12.307 LIB libspdk_ut.a 00:04:12.307 SYMLINK libspdk_ut_mock.so 00:04:12.307 SO libspdk_log.so.6.1 00:04:12.307 SO libspdk_ut.so.1.0 00:04:12.307 SYMLINK libspdk_ut.so 00:04:12.307 SYMLINK libspdk_log.so 00:04:12.307 CC lib/ioat/ioat.o 00:04:12.307 CC lib/dma/dma.o 00:04:12.307 CXX lib/trace_parser/trace.o 00:04:12.307 CC lib/util/base64.o 00:04:12.307 CC lib/util/bit_array.o 00:04:12.307 CC lib/util/cpuset.o 00:04:12.307 CC lib/util/crc16.o 00:04:12.307 CC lib/util/crc32.o 00:04:12.307 CC lib/util/crc32c.o 00:04:12.307 CC lib/vfio_user/host/vfio_user_pci.o 00:04:12.307 CC lib/util/crc32_ieee.o 00:04:12.307 CC lib/vfio_user/host/vfio_user.o 00:04:12.307 CC lib/util/crc64.o 00:04:12.307 LIB libspdk_dma.a 00:04:12.307 CC lib/util/dif.o 00:04:12.307 SO libspdk_dma.so.3.0 00:04:12.307 CC lib/util/fd.o 00:04:12.307 CC lib/util/file.o 00:04:12.307 LIB libspdk_ioat.a 00:04:12.307 SO libspdk_ioat.so.6.0 00:04:12.307 SYMLINK libspdk_dma.so 00:04:12.307 CC lib/util/hexlify.o 00:04:12.307 CC lib/util/iov.o 00:04:12.307 CC lib/util/math.o 00:04:12.307 SYMLINK libspdk_ioat.so 00:04:12.307 CC lib/util/pipe.o 00:04:12.307 CC lib/util/strerror_tls.o 00:04:12.307 LIB libspdk_vfio_user.a 00:04:12.307 CC lib/util/string.o 00:04:12.307 CC lib/util/uuid.o 00:04:12.307 SO libspdk_vfio_user.so.4.0 00:04:12.307 CC lib/util/fd_group.o 00:04:12.307 CC lib/util/xor.o 00:04:12.307 SYMLINK libspdk_vfio_user.so 00:04:12.307 CC lib/util/zipf.o 00:04:12.307 LIB libspdk_util.a 00:04:12.307 LIB libspdk_trace_parser.a 00:04:12.307 SO libspdk_util.so.8.0 00:04:12.307 SO libspdk_trace_parser.so.4.0 00:04:12.307 SYMLINK libspdk_trace_parser.so 00:04:12.307 SYMLINK libspdk_util.so 00:04:12.307 CC lib/idxd/idxd.o 00:04:12.307 CC lib/rdma/common.o 00:04:12.307 CC lib/rdma/rdma_verbs.o 00:04:12.307 CC lib/idxd/idxd_user.o 00:04:12.307 CC lib/conf/conf.o 00:04:12.307 CC lib/idxd/idxd_kernel.o 00:04:12.307 CC lib/env_dpdk/env.o 00:04:12.307 CC lib/vmd/vmd.o 00:04:12.307 CC lib/vmd/led.o 00:04:12.307 CC lib/json/json_parse.o 00:04:12.307 CC lib/json/json_util.o 00:04:12.307 CC lib/json/json_write.o 00:04:12.307 CC lib/env_dpdk/memory.o 00:04:12.307 CC lib/env_dpdk/pci.o 00:04:12.307 LIB libspdk_conf.a 00:04:12.307 CC lib/env_dpdk/init.o 00:04:12.307 LIB libspdk_rdma.a 00:04:12.307 SO libspdk_conf.so.5.0 00:04:12.307 SO libspdk_rdma.so.5.0 00:04:12.307 SYMLINK libspdk_conf.so 00:04:12.307 CC lib/env_dpdk/threads.o 00:04:12.307 SYMLINK libspdk_rdma.so 00:04:12.307 CC lib/env_dpdk/pci_ioat.o 00:04:12.307 CC lib/env_dpdk/pci_virtio.o 00:04:12.307 CC lib/env_dpdk/pci_vmd.o 00:04:12.307 LIB libspdk_json.a 00:04:12.307 CC lib/env_dpdk/pci_idxd.o 00:04:12.307 SO libspdk_json.so.5.1 00:04:12.307 CC lib/env_dpdk/pci_event.o 00:04:12.307 CC lib/env_dpdk/sigbus_handler.o 00:04:12.307 LIB libspdk_idxd.a 00:04:12.307 SO libspdk_idxd.so.11.0 00:04:12.307 SYMLINK libspdk_json.so 00:04:12.307 CC lib/env_dpdk/pci_dpdk.o 00:04:12.307 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:12.307 LIB libspdk_vmd.a 00:04:12.307 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:12.307 SO libspdk_vmd.so.5.0 00:04:12.307 SYMLINK libspdk_idxd.so 00:04:12.307 SYMLINK libspdk_vmd.so 00:04:12.307 CC lib/jsonrpc/jsonrpc_server.o 00:04:12.307 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:12.307 CC 
lib/jsonrpc/jsonrpc_client.o 00:04:12.307 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:12.307 LIB libspdk_jsonrpc.a 00:04:12.307 SO libspdk_jsonrpc.so.5.1 00:04:12.307 SYMLINK libspdk_jsonrpc.so 00:04:12.307 LIB libspdk_env_dpdk.a 00:04:12.307 CC lib/rpc/rpc.o 00:04:12.307 SO libspdk_env_dpdk.so.13.0 00:04:12.307 SYMLINK libspdk_env_dpdk.so 00:04:12.307 LIB libspdk_rpc.a 00:04:12.307 SO libspdk_rpc.so.5.0 00:04:12.307 SYMLINK libspdk_rpc.so 00:04:12.565 CC lib/notify/notify.o 00:04:12.565 CC lib/notify/notify_rpc.o 00:04:12.565 CC lib/sock/sock.o 00:04:12.565 CC lib/trace/trace.o 00:04:12.565 CC lib/sock/sock_rpc.o 00:04:12.565 CC lib/trace/trace_rpc.o 00:04:12.565 CC lib/trace/trace_flags.o 00:04:12.823 LIB libspdk_notify.a 00:04:12.823 SO libspdk_notify.so.5.0 00:04:12.823 LIB libspdk_trace.a 00:04:12.823 SYMLINK libspdk_notify.so 00:04:12.823 SO libspdk_trace.so.9.0 00:04:13.081 SYMLINK libspdk_trace.so 00:04:13.081 LIB libspdk_sock.a 00:04:13.081 SO libspdk_sock.so.8.0 00:04:13.081 SYMLINK libspdk_sock.so 00:04:13.081 CC lib/thread/thread.o 00:04:13.081 CC lib/thread/iobuf.o 00:04:13.339 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:13.339 CC lib/nvme/nvme_ctrlr.o 00:04:13.339 CC lib/nvme/nvme_fabric.o 00:04:13.339 CC lib/nvme/nvme_ns_cmd.o 00:04:13.339 CC lib/nvme/nvme_ns.o 00:04:13.339 CC lib/nvme/nvme_pcie.o 00:04:13.339 CC lib/nvme/nvme_pcie_common.o 00:04:13.339 CC lib/nvme/nvme_qpair.o 00:04:13.597 CC lib/nvme/nvme.o 00:04:14.163 CC lib/nvme/nvme_quirks.o 00:04:14.163 CC lib/nvme/nvme_transport.o 00:04:14.163 CC lib/nvme/nvme_discovery.o 00:04:14.163 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:14.163 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:14.163 CC lib/nvme/nvme_tcp.o 00:04:14.421 CC lib/nvme/nvme_opal.o 00:04:14.421 CC lib/nvme/nvme_io_msg.o 00:04:14.680 CC lib/nvme/nvme_poll_group.o 00:04:14.680 LIB libspdk_thread.a 00:04:14.680 CC lib/nvme/nvme_zns.o 00:04:14.680 CC lib/nvme/nvme_cuse.o 00:04:14.680 SO libspdk_thread.so.9.0 00:04:14.938 CC lib/nvme/nvme_vfio_user.o 00:04:14.938 SYMLINK libspdk_thread.so 00:04:14.938 CC lib/nvme/nvme_rdma.o 00:04:14.938 CC lib/accel/accel.o 00:04:14.938 CC lib/blob/blobstore.o 00:04:15.196 CC lib/accel/accel_rpc.o 00:04:15.196 CC lib/accel/accel_sw.o 00:04:15.454 CC lib/init/json_config.o 00:04:15.454 CC lib/virtio/virtio.o 00:04:15.454 CC lib/virtio/virtio_vhost_user.o 00:04:15.454 CC lib/virtio/virtio_vfio_user.o 00:04:15.454 CC lib/virtio/virtio_pci.o 00:04:15.712 CC lib/init/subsystem.o 00:04:15.712 CC lib/init/subsystem_rpc.o 00:04:15.712 CC lib/init/rpc.o 00:04:15.712 CC lib/blob/request.o 00:04:15.712 CC lib/blob/zeroes.o 00:04:15.712 CC lib/blob/blob_bs_dev.o 00:04:15.970 LIB libspdk_virtio.a 00:04:15.970 LIB libspdk_init.a 00:04:15.970 SO libspdk_virtio.so.6.0 00:04:15.970 SO libspdk_init.so.4.0 00:04:15.970 SYMLINK libspdk_virtio.so 00:04:15.970 LIB libspdk_accel.a 00:04:15.970 SYMLINK libspdk_init.so 00:04:15.970 SO libspdk_accel.so.14.0 00:04:16.229 SYMLINK libspdk_accel.so 00:04:16.229 CC lib/event/app.o 00:04:16.229 CC lib/event/log_rpc.o 00:04:16.229 CC lib/event/reactor.o 00:04:16.229 CC lib/event/app_rpc.o 00:04:16.229 CC lib/event/scheduler_static.o 00:04:16.229 LIB libspdk_nvme.a 00:04:16.229 CC lib/bdev/bdev.o 00:04:16.229 CC lib/bdev/bdev_zone.o 00:04:16.229 CC lib/bdev/bdev_rpc.o 00:04:16.229 CC lib/bdev/part.o 00:04:16.229 CC lib/bdev/scsi_nvme.o 00:04:16.488 SO libspdk_nvme.so.12.0 00:04:16.488 LIB libspdk_event.a 00:04:16.746 SO libspdk_event.so.12.0 00:04:16.746 SYMLINK libspdk_event.so 00:04:16.746 SYMLINK libspdk_nvme.so 00:04:17.679 
LIB libspdk_blob.a 00:04:17.679 SO libspdk_blob.so.10.1 00:04:17.937 SYMLINK libspdk_blob.so 00:04:18.195 CC lib/lvol/lvol.o 00:04:18.195 CC lib/blobfs/blobfs.o 00:04:18.195 CC lib/blobfs/tree.o 00:04:18.760 LIB libspdk_bdev.a 00:04:18.760 SO libspdk_bdev.so.14.0 00:04:19.018 LIB libspdk_blobfs.a 00:04:19.018 LIB libspdk_lvol.a 00:04:19.018 SYMLINK libspdk_bdev.so 00:04:19.018 SO libspdk_lvol.so.9.1 00:04:19.018 SO libspdk_blobfs.so.9.0 00:04:19.018 SYMLINK libspdk_lvol.so 00:04:19.018 SYMLINK libspdk_blobfs.so 00:04:19.018 CC lib/nvmf/ctrlr.o 00:04:19.018 CC lib/nvmf/ctrlr_discovery.o 00:04:19.018 CC lib/nvmf/ctrlr_bdev.o 00:04:19.018 CC lib/nvmf/subsystem.o 00:04:19.018 CC lib/nvmf/nvmf_rpc.o 00:04:19.018 CC lib/nvmf/nvmf.o 00:04:19.018 CC lib/ublk/ublk.o 00:04:19.018 CC lib/ftl/ftl_core.o 00:04:19.018 CC lib/scsi/dev.o 00:04:19.018 CC lib/nbd/nbd.o 00:04:19.276 CC lib/scsi/lun.o 00:04:19.534 CC lib/ftl/ftl_init.o 00:04:19.534 CC lib/nbd/nbd_rpc.o 00:04:19.534 CC lib/nvmf/transport.o 00:04:19.792 CC lib/ftl/ftl_layout.o 00:04:19.792 LIB libspdk_nbd.a 00:04:19.792 CC lib/scsi/port.o 00:04:19.792 CC lib/ublk/ublk_rpc.o 00:04:19.792 SO libspdk_nbd.so.6.0 00:04:19.792 CC lib/nvmf/tcp.o 00:04:19.792 SYMLINK libspdk_nbd.so 00:04:19.792 CC lib/scsi/scsi.o 00:04:19.792 CC lib/nvmf/rdma.o 00:04:19.792 LIB libspdk_ublk.a 00:04:20.050 CC lib/scsi/scsi_bdev.o 00:04:20.050 SO libspdk_ublk.so.2.0 00:04:20.050 CC lib/scsi/scsi_pr.o 00:04:20.050 CC lib/scsi/scsi_rpc.o 00:04:20.050 CC lib/ftl/ftl_debug.o 00:04:20.050 SYMLINK libspdk_ublk.so 00:04:20.050 CC lib/ftl/ftl_io.o 00:04:20.050 CC lib/scsi/task.o 00:04:20.307 CC lib/ftl/ftl_sb.o 00:04:20.307 CC lib/ftl/ftl_l2p.o 00:04:20.307 CC lib/ftl/ftl_l2p_flat.o 00:04:20.307 CC lib/ftl/ftl_nv_cache.o 00:04:20.307 CC lib/ftl/ftl_band.o 00:04:20.307 CC lib/ftl/ftl_band_ops.o 00:04:20.307 LIB libspdk_scsi.a 00:04:20.307 CC lib/ftl/ftl_writer.o 00:04:20.564 CC lib/ftl/ftl_rq.o 00:04:20.564 CC lib/ftl/ftl_reloc.o 00:04:20.564 SO libspdk_scsi.so.8.0 00:04:20.564 SYMLINK libspdk_scsi.so 00:04:20.564 CC lib/ftl/ftl_l2p_cache.o 00:04:20.564 CC lib/ftl/ftl_p2l.o 00:04:20.822 CC lib/ftl/mngt/ftl_mngt.o 00:04:20.822 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:20.822 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:20.822 CC lib/iscsi/conn.o 00:04:20.822 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:20.822 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:21.091 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:21.091 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:21.091 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:21.091 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:21.091 CC lib/vhost/vhost.o 00:04:21.091 CC lib/vhost/vhost_rpc.o 00:04:21.374 CC lib/vhost/vhost_scsi.o 00:04:21.374 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:21.374 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:21.374 CC lib/vhost/vhost_blk.o 00:04:21.374 CC lib/vhost/rte_vhost_user.o 00:04:21.374 CC lib/iscsi/init_grp.o 00:04:21.374 CC lib/iscsi/iscsi.o 00:04:21.374 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:21.374 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:21.633 CC lib/ftl/utils/ftl_conf.o 00:04:21.633 CC lib/iscsi/md5.o 00:04:21.892 CC lib/ftl/utils/ftl_md.o 00:04:21.892 CC lib/ftl/utils/ftl_mempool.o 00:04:21.892 CC lib/ftl/utils/ftl_bitmap.o 00:04:21.892 LIB libspdk_nvmf.a 00:04:21.892 CC lib/ftl/utils/ftl_property.o 00:04:21.892 CC lib/iscsi/param.o 00:04:21.892 SO libspdk_nvmf.so.17.0 00:04:22.150 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:22.150 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:22.150 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:22.150 CC 
lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:22.150 SYMLINK libspdk_nvmf.so 00:04:22.150 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:22.150 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:22.409 CC lib/iscsi/portal_grp.o 00:04:22.409 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:22.409 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:22.409 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:22.409 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:22.409 CC lib/ftl/base/ftl_base_dev.o 00:04:22.409 CC lib/ftl/base/ftl_base_bdev.o 00:04:22.409 CC lib/iscsi/tgt_node.o 00:04:22.409 LIB libspdk_vhost.a 00:04:22.409 SO libspdk_vhost.so.7.1 00:04:22.409 CC lib/ftl/ftl_trace.o 00:04:22.409 CC lib/iscsi/iscsi_subsystem.o 00:04:22.668 CC lib/iscsi/iscsi_rpc.o 00:04:22.668 CC lib/iscsi/task.o 00:04:22.668 SYMLINK libspdk_vhost.so 00:04:22.668 LIB libspdk_ftl.a 00:04:22.925 LIB libspdk_iscsi.a 00:04:22.925 SO libspdk_ftl.so.8.0 00:04:23.182 SO libspdk_iscsi.so.7.0 00:04:23.182 SYMLINK libspdk_iscsi.so 00:04:23.440 SYMLINK libspdk_ftl.so 00:04:23.440 CC module/env_dpdk/env_dpdk_rpc.o 00:04:23.698 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:23.698 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:23.698 CC module/accel/error/accel_error.o 00:04:23.698 CC module/accel/ioat/accel_ioat.o 00:04:23.699 CC module/blob/bdev/blob_bdev.o 00:04:23.699 CC module/scheduler/gscheduler/gscheduler.o 00:04:23.699 CC module/sock/posix/posix.o 00:04:23.699 CC module/accel/iaa/accel_iaa.o 00:04:23.699 CC module/accel/dsa/accel_dsa.o 00:04:23.699 LIB libspdk_env_dpdk_rpc.a 00:04:23.699 SO libspdk_env_dpdk_rpc.so.5.0 00:04:23.699 LIB libspdk_scheduler_dpdk_governor.a 00:04:23.699 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:23.699 LIB libspdk_scheduler_gscheduler.a 00:04:23.956 CC module/accel/ioat/accel_ioat_rpc.o 00:04:23.956 LIB libspdk_scheduler_dynamic.a 00:04:23.956 CC module/accel/error/accel_error_rpc.o 00:04:23.956 CC module/accel/iaa/accel_iaa_rpc.o 00:04:23.956 SO libspdk_scheduler_gscheduler.so.3.0 00:04:23.956 SO libspdk_scheduler_dynamic.so.3.0 00:04:23.956 SYMLINK libspdk_env_dpdk_rpc.so 00:04:23.956 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:23.956 CC module/accel/dsa/accel_dsa_rpc.o 00:04:23.956 SYMLINK libspdk_scheduler_dynamic.so 00:04:23.956 SYMLINK libspdk_scheduler_gscheduler.so 00:04:23.956 LIB libspdk_blob_bdev.a 00:04:23.956 LIB libspdk_accel_ioat.a 00:04:23.956 LIB libspdk_accel_error.a 00:04:23.956 SO libspdk_blob_bdev.so.10.1 00:04:23.956 LIB libspdk_accel_iaa.a 00:04:23.956 SO libspdk_accel_ioat.so.5.0 00:04:23.956 SO libspdk_accel_error.so.1.0 00:04:23.956 SO libspdk_accel_iaa.so.2.0 00:04:23.956 LIB libspdk_accel_dsa.a 00:04:23.956 SYMLINK libspdk_blob_bdev.so 00:04:23.956 SYMLINK libspdk_accel_ioat.so 00:04:24.214 SO libspdk_accel_dsa.so.4.0 00:04:24.214 SYMLINK libspdk_accel_error.so 00:04:24.214 SYMLINK libspdk_accel_iaa.so 00:04:24.214 SYMLINK libspdk_accel_dsa.so 00:04:24.214 CC module/bdev/lvol/vbdev_lvol.o 00:04:24.214 CC module/bdev/gpt/gpt.o 00:04:24.214 CC module/bdev/error/vbdev_error.o 00:04:24.214 CC module/bdev/nvme/bdev_nvme.o 00:04:24.214 CC module/bdev/delay/vbdev_delay.o 00:04:24.214 CC module/bdev/null/bdev_null.o 00:04:24.214 CC module/bdev/malloc/bdev_malloc.o 00:04:24.214 CC module/blobfs/bdev/blobfs_bdev.o 00:04:24.214 CC module/bdev/passthru/vbdev_passthru.o 00:04:24.472 LIB libspdk_sock_posix.a 00:04:24.472 SO libspdk_sock_posix.so.5.0 00:04:24.472 CC module/bdev/gpt/vbdev_gpt.o 00:04:24.472 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:24.472 CC module/bdev/error/vbdev_error_rpc.o 00:04:24.472 
SYMLINK libspdk_sock_posix.so 00:04:24.472 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:24.472 CC module/bdev/null/bdev_null_rpc.o 00:04:24.730 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:24.730 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:24.730 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:24.730 LIB libspdk_bdev_error.a 00:04:24.730 LIB libspdk_blobfs_bdev.a 00:04:24.730 LIB libspdk_bdev_null.a 00:04:24.730 LIB libspdk_bdev_gpt.a 00:04:24.730 SO libspdk_blobfs_bdev.so.5.0 00:04:24.730 SO libspdk_bdev_error.so.5.0 00:04:24.730 SO libspdk_bdev_gpt.so.5.0 00:04:24.730 SO libspdk_bdev_null.so.5.0 00:04:24.731 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:24.731 LIB libspdk_bdev_passthru.a 00:04:24.731 LIB libspdk_bdev_malloc.a 00:04:24.731 LIB libspdk_bdev_delay.a 00:04:24.731 SYMLINK libspdk_blobfs_bdev.so 00:04:24.731 SYMLINK libspdk_bdev_error.so 00:04:24.989 SYMLINK libspdk_bdev_null.so 00:04:24.989 CC module/bdev/nvme/nvme_rpc.o 00:04:24.989 SYMLINK libspdk_bdev_gpt.so 00:04:24.989 SO libspdk_bdev_passthru.so.5.0 00:04:24.989 SO libspdk_bdev_malloc.so.5.0 00:04:24.989 SO libspdk_bdev_delay.so.5.0 00:04:24.989 CC module/bdev/nvme/bdev_mdns_client.o 00:04:24.989 SYMLINK libspdk_bdev_malloc.so 00:04:24.989 SYMLINK libspdk_bdev_passthru.so 00:04:24.989 CC module/bdev/nvme/vbdev_opal.o 00:04:24.989 SYMLINK libspdk_bdev_delay.so 00:04:24.989 CC module/bdev/raid/bdev_raid.o 00:04:24.989 CC module/bdev/split/vbdev_split.o 00:04:24.989 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:24.989 CC module/bdev/aio/bdev_aio.o 00:04:25.246 CC module/bdev/aio/bdev_aio_rpc.o 00:04:25.246 LIB libspdk_bdev_lvol.a 00:04:25.246 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:25.246 SO libspdk_bdev_lvol.so.5.0 00:04:25.246 CC module/bdev/split/vbdev_split_rpc.o 00:04:25.246 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:25.246 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:25.246 SYMLINK libspdk_bdev_lvol.so 00:04:25.246 CC module/bdev/raid/bdev_raid_rpc.o 00:04:25.246 CC module/bdev/raid/bdev_raid_sb.o 00:04:25.504 CC module/bdev/ftl/bdev_ftl.o 00:04:25.504 LIB libspdk_bdev_split.a 00:04:25.504 SO libspdk_bdev_split.so.5.0 00:04:25.504 LIB libspdk_bdev_zone_block.a 00:04:25.504 LIB libspdk_bdev_aio.a 00:04:25.504 CC module/bdev/raid/raid0.o 00:04:25.504 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:25.504 SO libspdk_bdev_aio.so.5.0 00:04:25.504 SO libspdk_bdev_zone_block.so.5.0 00:04:25.504 SYMLINK libspdk_bdev_split.so 00:04:25.504 CC module/bdev/raid/raid1.o 00:04:25.504 SYMLINK libspdk_bdev_aio.so 00:04:25.504 SYMLINK libspdk_bdev_zone_block.so 00:04:25.504 CC module/bdev/raid/concat.o 00:04:25.762 CC module/bdev/iscsi/bdev_iscsi.o 00:04:25.762 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:25.762 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:25.762 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:25.762 LIB libspdk_bdev_ftl.a 00:04:25.762 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:25.762 SO libspdk_bdev_ftl.so.5.0 00:04:25.762 SYMLINK libspdk_bdev_ftl.so 00:04:25.762 LIB libspdk_bdev_raid.a 00:04:26.019 SO libspdk_bdev_raid.so.5.0 00:04:26.019 SYMLINK libspdk_bdev_raid.so 00:04:26.019 LIB libspdk_bdev_iscsi.a 00:04:26.020 SO libspdk_bdev_iscsi.so.5.0 00:04:26.277 SYMLINK libspdk_bdev_iscsi.so 00:04:26.277 LIB libspdk_bdev_virtio.a 00:04:26.277 SO libspdk_bdev_virtio.so.5.0 00:04:26.277 SYMLINK libspdk_bdev_virtio.so 00:04:26.535 LIB libspdk_bdev_nvme.a 00:04:26.535 SO libspdk_bdev_nvme.so.6.0 00:04:26.793 SYMLINK libspdk_bdev_nvme.so 00:04:27.050 CC module/event/subsystems/vhost_blk/vhost_blk.o 
00:04:27.050 CC module/event/subsystems/iobuf/iobuf.o 00:04:27.050 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:27.050 CC module/event/subsystems/vmd/vmd.o 00:04:27.050 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:27.050 CC module/event/subsystems/sock/sock.o 00:04:27.050 CC module/event/subsystems/scheduler/scheduler.o 00:04:27.307 LIB libspdk_event_sock.a 00:04:27.307 LIB libspdk_event_vhost_blk.a 00:04:27.307 LIB libspdk_event_scheduler.a 00:04:27.307 LIB libspdk_event_iobuf.a 00:04:27.307 LIB libspdk_event_vmd.a 00:04:27.307 SO libspdk_event_vhost_blk.so.2.0 00:04:27.307 SO libspdk_event_sock.so.4.0 00:04:27.307 SO libspdk_event_scheduler.so.3.0 00:04:27.307 SO libspdk_event_iobuf.so.2.0 00:04:27.307 SO libspdk_event_vmd.so.5.0 00:04:27.307 SYMLINK libspdk_event_sock.so 00:04:27.307 SYMLINK libspdk_event_vhost_blk.so 00:04:27.307 SYMLINK libspdk_event_scheduler.so 00:04:27.307 SYMLINK libspdk_event_iobuf.so 00:04:27.307 SYMLINK libspdk_event_vmd.so 00:04:27.565 CC module/event/subsystems/accel/accel.o 00:04:27.822 LIB libspdk_event_accel.a 00:04:27.822 SO libspdk_event_accel.so.5.0 00:04:27.822 SYMLINK libspdk_event_accel.so 00:04:28.079 CC module/event/subsystems/bdev/bdev.o 00:04:28.335 LIB libspdk_event_bdev.a 00:04:28.335 SO libspdk_event_bdev.so.5.0 00:04:28.335 SYMLINK libspdk_event_bdev.so 00:04:28.593 CC module/event/subsystems/scsi/scsi.o 00:04:28.593 CC module/event/subsystems/nbd/nbd.o 00:04:28.593 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:28.593 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:28.593 CC module/event/subsystems/ublk/ublk.o 00:04:28.593 LIB libspdk_event_nbd.a 00:04:28.593 LIB libspdk_event_scsi.a 00:04:28.851 LIB libspdk_event_ublk.a 00:04:28.851 SO libspdk_event_nbd.so.5.0 00:04:28.851 SO libspdk_event_ublk.so.2.0 00:04:28.851 SO libspdk_event_scsi.so.5.0 00:04:28.851 SYMLINK libspdk_event_nbd.so 00:04:28.851 LIB libspdk_event_nvmf.a 00:04:28.851 SYMLINK libspdk_event_ublk.so 00:04:28.851 SYMLINK libspdk_event_scsi.so 00:04:28.851 SO libspdk_event_nvmf.so.5.0 00:04:28.851 SYMLINK libspdk_event_nvmf.so 00:04:29.109 CC module/event/subsystems/iscsi/iscsi.o 00:04:29.109 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:29.109 LIB libspdk_event_vhost_scsi.a 00:04:29.109 LIB libspdk_event_iscsi.a 00:04:29.109 SO libspdk_event_vhost_scsi.so.2.0 00:04:29.367 SO libspdk_event_iscsi.so.5.0 00:04:29.367 SYMLINK libspdk_event_vhost_scsi.so 00:04:29.367 SYMLINK libspdk_event_iscsi.so 00:04:29.367 SO libspdk.so.5.0 00:04:29.367 SYMLINK libspdk.so 00:04:29.624 TEST_HEADER include/spdk/accel.h 00:04:29.624 TEST_HEADER include/spdk/accel_module.h 00:04:29.624 TEST_HEADER include/spdk/assert.h 00:04:29.624 CXX app/trace/trace.o 00:04:29.624 TEST_HEADER include/spdk/barrier.h 00:04:29.625 TEST_HEADER include/spdk/base64.h 00:04:29.625 TEST_HEADER include/spdk/bdev.h 00:04:29.625 TEST_HEADER include/spdk/bdev_module.h 00:04:29.625 TEST_HEADER include/spdk/bdev_zone.h 00:04:29.625 TEST_HEADER include/spdk/bit_array.h 00:04:29.625 TEST_HEADER include/spdk/bit_pool.h 00:04:29.625 TEST_HEADER include/spdk/blob_bdev.h 00:04:29.625 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:29.625 TEST_HEADER include/spdk/blobfs.h 00:04:29.625 TEST_HEADER include/spdk/blob.h 00:04:29.625 TEST_HEADER include/spdk/conf.h 00:04:29.625 TEST_HEADER include/spdk/config.h 00:04:29.625 TEST_HEADER include/spdk/cpuset.h 00:04:29.625 TEST_HEADER include/spdk/crc16.h 00:04:29.625 TEST_HEADER include/spdk/crc32.h 00:04:29.625 TEST_HEADER include/spdk/crc64.h 00:04:29.625 TEST_HEADER 
include/spdk/dif.h 00:04:29.625 TEST_HEADER include/spdk/dma.h 00:04:29.625 TEST_HEADER include/spdk/endian.h 00:04:29.625 TEST_HEADER include/spdk/env_dpdk.h 00:04:29.625 TEST_HEADER include/spdk/env.h 00:04:29.625 TEST_HEADER include/spdk/event.h 00:04:29.625 TEST_HEADER include/spdk/fd_group.h 00:04:29.625 TEST_HEADER include/spdk/fd.h 00:04:29.625 CC examples/accel/perf/accel_perf.o 00:04:29.625 TEST_HEADER include/spdk/file.h 00:04:29.625 TEST_HEADER include/spdk/ftl.h 00:04:29.625 TEST_HEADER include/spdk/gpt_spec.h 00:04:29.625 TEST_HEADER include/spdk/hexlify.h 00:04:29.625 TEST_HEADER include/spdk/histogram_data.h 00:04:29.625 TEST_HEADER include/spdk/idxd.h 00:04:29.625 TEST_HEADER include/spdk/idxd_spec.h 00:04:29.625 TEST_HEADER include/spdk/init.h 00:04:29.625 TEST_HEADER include/spdk/ioat.h 00:04:29.625 TEST_HEADER include/spdk/ioat_spec.h 00:04:29.625 TEST_HEADER include/spdk/iscsi_spec.h 00:04:29.625 CC examples/blob/hello_world/hello_blob.o 00:04:29.625 TEST_HEADER include/spdk/json.h 00:04:29.625 CC test/blobfs/mkfs/mkfs.o 00:04:29.625 TEST_HEADER include/spdk/jsonrpc.h 00:04:29.625 TEST_HEADER include/spdk/likely.h 00:04:29.625 TEST_HEADER include/spdk/log.h 00:04:29.625 TEST_HEADER include/spdk/lvol.h 00:04:29.625 CC test/bdev/bdevio/bdevio.o 00:04:29.625 CC test/accel/dif/dif.o 00:04:29.625 TEST_HEADER include/spdk/memory.h 00:04:29.625 CC examples/bdev/hello_world/hello_bdev.o 00:04:29.625 TEST_HEADER include/spdk/mmio.h 00:04:29.625 TEST_HEADER include/spdk/nbd.h 00:04:29.625 TEST_HEADER include/spdk/notify.h 00:04:29.625 TEST_HEADER include/spdk/nvme.h 00:04:29.625 CC test/dma/test_dma/test_dma.o 00:04:29.625 TEST_HEADER include/spdk/nvme_intel.h 00:04:29.625 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:29.625 CC test/app/bdev_svc/bdev_svc.o 00:04:29.625 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:29.625 TEST_HEADER include/spdk/nvme_spec.h 00:04:29.625 TEST_HEADER include/spdk/nvme_zns.h 00:04:29.883 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:29.883 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:29.883 TEST_HEADER include/spdk/nvmf.h 00:04:29.883 TEST_HEADER include/spdk/nvmf_spec.h 00:04:29.883 TEST_HEADER include/spdk/nvmf_transport.h 00:04:29.883 TEST_HEADER include/spdk/opal.h 00:04:29.883 TEST_HEADER include/spdk/opal_spec.h 00:04:29.883 TEST_HEADER include/spdk/pci_ids.h 00:04:29.883 TEST_HEADER include/spdk/pipe.h 00:04:29.883 TEST_HEADER include/spdk/queue.h 00:04:29.883 TEST_HEADER include/spdk/reduce.h 00:04:29.883 TEST_HEADER include/spdk/rpc.h 00:04:29.883 TEST_HEADER include/spdk/scheduler.h 00:04:29.883 TEST_HEADER include/spdk/scsi.h 00:04:29.883 TEST_HEADER include/spdk/scsi_spec.h 00:04:29.883 TEST_HEADER include/spdk/sock.h 00:04:29.883 TEST_HEADER include/spdk/stdinc.h 00:04:29.883 TEST_HEADER include/spdk/string.h 00:04:29.883 TEST_HEADER include/spdk/thread.h 00:04:29.883 TEST_HEADER include/spdk/trace.h 00:04:29.883 TEST_HEADER include/spdk/trace_parser.h 00:04:29.883 TEST_HEADER include/spdk/tree.h 00:04:29.883 TEST_HEADER include/spdk/ublk.h 00:04:29.883 TEST_HEADER include/spdk/util.h 00:04:29.883 TEST_HEADER include/spdk/uuid.h 00:04:29.883 TEST_HEADER include/spdk/version.h 00:04:29.883 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:29.883 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:29.883 TEST_HEADER include/spdk/vhost.h 00:04:29.883 TEST_HEADER include/spdk/vmd.h 00:04:29.883 TEST_HEADER include/spdk/xor.h 00:04:29.883 TEST_HEADER include/spdk/zipf.h 00:04:29.883 CXX test/cpp_headers/accel.o 00:04:29.883 LINK hello_bdev 
00:04:29.883 LINK bdev_svc 00:04:29.883 LINK mkfs 00:04:29.883 LINK hello_blob 00:04:30.142 LINK spdk_trace 00:04:30.142 CXX test/cpp_headers/accel_module.o 00:04:30.142 LINK dif 00:04:30.142 LINK test_dma 00:04:30.142 LINK accel_perf 00:04:30.142 LINK bdevio 00:04:30.404 CC test/app/histogram_perf/histogram_perf.o 00:04:30.404 CC examples/bdev/bdevperf/bdevperf.o 00:04:30.404 CC app/trace_record/trace_record.o 00:04:30.404 CXX test/cpp_headers/assert.o 00:04:30.404 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:30.404 CXX test/cpp_headers/barrier.o 00:04:30.404 CC examples/blob/cli/blobcli.o 00:04:30.404 CXX test/cpp_headers/base64.o 00:04:30.404 CXX test/cpp_headers/bdev.o 00:04:30.404 LINK histogram_perf 00:04:30.662 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:30.662 LINK spdk_trace_record 00:04:30.662 CC test/env/mem_callbacks/mem_callbacks.o 00:04:30.662 CXX test/cpp_headers/bdev_module.o 00:04:30.662 CXX test/cpp_headers/bdev_zone.o 00:04:30.662 CXX test/cpp_headers/bit_array.o 00:04:30.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:30.662 LINK nvme_fuzz 00:04:30.662 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:30.662 CXX test/cpp_headers/bit_pool.o 00:04:30.920 CC app/nvmf_tgt/nvmf_main.o 00:04:30.920 LINK blobcli 00:04:30.920 CC app/iscsi_tgt/iscsi_tgt.o 00:04:30.920 CXX test/cpp_headers/blob_bdev.o 00:04:30.920 CC app/spdk_tgt/spdk_tgt.o 00:04:30.920 LINK nvmf_tgt 00:04:30.920 LINK bdevperf 00:04:31.178 CC app/spdk_lspci/spdk_lspci.o 00:04:31.178 LINK iscsi_tgt 00:04:31.178 CXX test/cpp_headers/blobfs_bdev.o 00:04:31.178 CC app/spdk_nvme_perf/perf.o 00:04:31.178 LINK spdk_tgt 00:04:31.178 LINK vhost_fuzz 00:04:31.178 LINK mem_callbacks 00:04:31.178 LINK spdk_lspci 00:04:31.178 CXX test/cpp_headers/blobfs.o 00:04:31.178 CXX test/cpp_headers/blob.o 00:04:31.437 CC test/env/vtophys/vtophys.o 00:04:31.437 CC examples/ioat/perf/perf.o 00:04:31.437 CC examples/nvme/hello_world/hello_world.o 00:04:31.437 CC examples/ioat/verify/verify.o 00:04:31.437 CC examples/sock/hello_world/hello_sock.o 00:04:31.437 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:31.437 CXX test/cpp_headers/conf.o 00:04:31.437 CC app/spdk_nvme_identify/identify.o 00:04:31.437 LINK vtophys 00:04:31.695 LINK env_dpdk_post_init 00:04:31.696 LINK ioat_perf 00:04:31.696 LINK verify 00:04:31.696 LINK hello_world 00:04:31.696 CXX test/cpp_headers/config.o 00:04:31.696 LINK hello_sock 00:04:31.696 CXX test/cpp_headers/cpuset.o 00:04:31.696 CXX test/cpp_headers/crc16.o 00:04:31.696 CXX test/cpp_headers/crc32.o 00:04:31.696 CXX test/cpp_headers/crc64.o 00:04:31.696 CC test/env/memory/memory_ut.o 00:04:31.954 CC examples/nvme/reconnect/reconnect.o 00:04:31.954 CC app/spdk_nvme_discover/discovery_aer.o 00:04:31.954 CC app/spdk_top/spdk_top.o 00:04:31.954 LINK spdk_nvme_perf 00:04:31.954 CXX test/cpp_headers/dif.o 00:04:31.954 CC app/vhost/vhost.o 00:04:31.954 CC app/spdk_dd/spdk_dd.o 00:04:31.954 LINK spdk_nvme_discover 00:04:32.212 CXX test/cpp_headers/dma.o 00:04:32.212 CXX test/cpp_headers/endian.o 00:04:32.212 LINK iscsi_fuzz 00:04:32.212 LINK vhost 00:04:32.212 LINK reconnect 00:04:32.212 LINK spdk_nvme_identify 00:04:32.212 CXX test/cpp_headers/env_dpdk.o 00:04:32.212 CC test/app/jsoncat/jsoncat.o 00:04:32.470 LINK spdk_dd 00:04:32.470 CC app/fio/nvme/fio_plugin.o 00:04:32.470 CXX test/cpp_headers/env.o 00:04:32.470 LINK jsoncat 00:04:32.470 CC app/fio/bdev/fio_plugin.o 00:04:32.470 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:32.470 CC examples/vmd/lsvmd/lsvmd.o 00:04:32.470 CC 
examples/nvmf/nvmf/nvmf.o 00:04:32.729 CXX test/cpp_headers/event.o 00:04:32.729 LINK lsvmd 00:04:32.729 CC test/app/stub/stub.o 00:04:32.729 CC examples/vmd/led/led.o 00:04:32.729 LINK spdk_top 00:04:32.729 LINK memory_ut 00:04:32.729 LINK led 00:04:32.729 LINK stub 00:04:32.729 CXX test/cpp_headers/fd_group.o 00:04:32.987 LINK nvmf 00:04:32.987 CXX test/cpp_headers/fd.o 00:04:32.987 CC test/event/event_perf/event_perf.o 00:04:32.987 LINK spdk_nvme 00:04:32.987 LINK nvme_manage 00:04:32.987 LINK spdk_bdev 00:04:32.987 CXX test/cpp_headers/file.o 00:04:32.987 CC test/env/pci/pci_ut.o 00:04:32.987 CC examples/nvme/arbitration/arbitration.o 00:04:32.987 LINK event_perf 00:04:32.987 CC test/nvme/aer/aer.o 00:04:32.987 CXX test/cpp_headers/ftl.o 00:04:33.244 CC test/nvme/reset/reset.o 00:04:33.244 CC test/nvme/sgl/sgl.o 00:04:33.244 CXX test/cpp_headers/gpt_spec.o 00:04:33.244 CC test/lvol/esnap/esnap.o 00:04:33.244 CC examples/nvme/hotplug/hotplug.o 00:04:33.244 CC test/event/reactor/reactor.o 00:04:33.244 CXX test/cpp_headers/hexlify.o 00:04:33.244 LINK reset 00:04:33.501 CC test/event/reactor_perf/reactor_perf.o 00:04:33.501 LINK sgl 00:04:33.501 LINK aer 00:04:33.501 LINK arbitration 00:04:33.501 LINK pci_ut 00:04:33.501 LINK reactor 00:04:33.501 LINK hotplug 00:04:33.501 CXX test/cpp_headers/histogram_data.o 00:04:33.501 CXX test/cpp_headers/idxd.o 00:04:33.501 LINK reactor_perf 00:04:33.501 CXX test/cpp_headers/idxd_spec.o 00:04:33.501 CXX test/cpp_headers/init.o 00:04:33.758 CC test/rpc_client/rpc_client_test.o 00:04:33.758 CC test/nvme/e2edp/nvme_dp.o 00:04:33.758 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:33.758 CXX test/cpp_headers/ioat.o 00:04:33.758 CC test/nvme/overhead/overhead.o 00:04:33.758 CXX test/cpp_headers/ioat_spec.o 00:04:33.758 CC test/event/app_repeat/app_repeat.o 00:04:33.758 CC test/thread/poller_perf/poller_perf.o 00:04:33.758 LINK rpc_client_test 00:04:33.758 CC examples/util/zipf/zipf.o 00:04:34.017 LINK cmb_copy 00:04:34.017 LINK app_repeat 00:04:34.017 CXX test/cpp_headers/iscsi_spec.o 00:04:34.017 LINK nvme_dp 00:04:34.017 LINK poller_perf 00:04:34.017 CXX test/cpp_headers/json.o 00:04:34.017 CC test/nvme/err_injection/err_injection.o 00:04:34.017 LINK zipf 00:04:34.017 LINK overhead 00:04:34.017 CXX test/cpp_headers/jsonrpc.o 00:04:34.275 CC examples/nvme/abort/abort.o 00:04:34.275 LINK err_injection 00:04:34.275 CC test/nvme/startup/startup.o 00:04:34.275 CC test/event/scheduler/scheduler.o 00:04:34.275 CC examples/thread/thread/thread_ex.o 00:04:34.275 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:34.275 CC test/nvme/reserve/reserve.o 00:04:34.275 CC examples/idxd/perf/perf.o 00:04:34.275 CXX test/cpp_headers/likely.o 00:04:34.275 LINK startup 00:04:34.533 LINK pmr_persistence 00:04:34.533 LINK scheduler 00:04:34.533 CXX test/cpp_headers/log.o 00:04:34.533 LINK reserve 00:04:34.533 LINK thread 00:04:34.533 LINK abort 00:04:34.533 LINK idxd_perf 00:04:34.533 CC test/nvme/simple_copy/simple_copy.o 00:04:34.533 CXX test/cpp_headers/lvol.o 00:04:34.793 CC test/nvme/connect_stress/connect_stress.o 00:04:34.793 CC test/nvme/boot_partition/boot_partition.o 00:04:34.793 CXX test/cpp_headers/memory.o 00:04:34.793 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:34.793 CC test/nvme/compliance/nvme_compliance.o 00:04:34.793 CC test/nvme/fused_ordering/fused_ordering.o 00:04:34.793 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:34.793 LINK simple_copy 00:04:34.793 CXX test/cpp_headers/mmio.o 00:04:34.793 LINK connect_stress 00:04:34.793 LINK boot_partition 
00:04:35.051 LINK interrupt_tgt 00:04:35.051 LINK fused_ordering 00:04:35.051 CXX test/cpp_headers/nbd.o 00:04:35.051 CXX test/cpp_headers/notify.o 00:04:35.051 LINK doorbell_aers 00:04:35.051 CXX test/cpp_headers/nvme.o 00:04:35.051 CC test/nvme/fdp/fdp.o 00:04:35.051 CC test/nvme/cuse/cuse.o 00:04:35.051 CXX test/cpp_headers/nvme_intel.o 00:04:35.051 LINK nvme_compliance 00:04:35.310 CXX test/cpp_headers/nvme_ocssd.o 00:04:35.310 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:35.310 CXX test/cpp_headers/nvme_spec.o 00:04:35.310 CXX test/cpp_headers/nvme_zns.o 00:04:35.310 CXX test/cpp_headers/nvmf_cmd.o 00:04:35.310 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:35.310 CXX test/cpp_headers/nvmf.o 00:04:35.310 CXX test/cpp_headers/nvmf_spec.o 00:04:35.310 LINK fdp 00:04:35.310 CXX test/cpp_headers/nvmf_transport.o 00:04:35.569 CXX test/cpp_headers/opal.o 00:04:35.569 CXX test/cpp_headers/opal_spec.o 00:04:35.569 CXX test/cpp_headers/pci_ids.o 00:04:35.569 CXX test/cpp_headers/pipe.o 00:04:35.569 CXX test/cpp_headers/queue.o 00:04:35.569 CXX test/cpp_headers/reduce.o 00:04:35.569 CXX test/cpp_headers/rpc.o 00:04:35.827 CXX test/cpp_headers/scheduler.o 00:04:35.827 CXX test/cpp_headers/scsi.o 00:04:35.827 CXX test/cpp_headers/scsi_spec.o 00:04:35.827 CXX test/cpp_headers/sock.o 00:04:35.827 CXX test/cpp_headers/stdinc.o 00:04:35.827 CXX test/cpp_headers/string.o 00:04:35.827 CXX test/cpp_headers/thread.o 00:04:35.827 CXX test/cpp_headers/trace.o 00:04:35.827 CXX test/cpp_headers/trace_parser.o 00:04:36.085 CXX test/cpp_headers/tree.o 00:04:36.085 CXX test/cpp_headers/ublk.o 00:04:36.085 CXX test/cpp_headers/util.o 00:04:36.085 CXX test/cpp_headers/uuid.o 00:04:36.085 CXX test/cpp_headers/version.o 00:04:36.085 CXX test/cpp_headers/vfio_user_pci.o 00:04:36.085 CXX test/cpp_headers/vfio_user_spec.o 00:04:36.085 CXX test/cpp_headers/vhost.o 00:04:36.085 CXX test/cpp_headers/vmd.o 00:04:36.085 CXX test/cpp_headers/xor.o 00:04:36.085 CXX test/cpp_headers/zipf.o 00:04:36.085 LINK cuse 00:04:37.461 LINK esnap 00:04:40.745 00:04:40.745 real 0m55.399s 00:04:40.745 user 5m10.110s 00:04:40.745 sys 1m9.163s 00:04:40.745 ************************************ 00:04:40.745 END TEST make 00:04:40.745 ************************************ 00:04:40.745 09:57:11 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:40.745 09:57:11 -- common/autotest_common.sh@10 -- $ set +x 00:04:41.004 09:57:11 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:41.004 09:57:11 -- nvmf/common.sh@7 -- # uname -s 00:04:41.004 09:57:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.004 09:57:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.004 09:57:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.004 09:57:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.004 09:57:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:41.004 09:57:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:41.004 09:57:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.004 09:57:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:41.004 09:57:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.004 09:57:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:41.004 09:57:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:04:41.004 09:57:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:04:41.004 09:57:11 -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.004 09:57:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:41.004 09:57:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:41.004 09:57:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:41.004 09:57:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.004 09:57:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.004 09:57:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.004 09:57:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.004 09:57:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.004 09:57:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.004 09:57:11 -- paths/export.sh@5 -- # export PATH 00:04:41.004 09:57:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.004 09:57:11 -- nvmf/common.sh@46 -- # : 0 00:04:41.004 09:57:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:41.004 09:57:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:41.004 09:57:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:41.004 09:57:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.004 09:57:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.004 09:57:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:41.004 09:57:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:41.004 09:57:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:41.004 09:57:11 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:41.004 09:57:11 -- spdk/autotest.sh@32 -- # uname -s 00:04:41.004 09:57:11 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:41.004 09:57:11 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:41.004 09:57:11 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.004 09:57:11 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:41.004 09:57:11 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:41.004 09:57:11 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:41.004 09:57:11 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:41.004 09:57:11 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:41.004 09:57:11 -- spdk/autotest.sh@48 -- # udevadm_pid=61724 00:04:41.004 09:57:11 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.004 
09:57:11 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:41.262 09:57:11 -- spdk/autotest.sh@54 -- # echo 61729 00:04:41.262 09:57:11 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.262 09:57:11 -- spdk/autotest.sh@56 -- # echo 61735 00:04:41.262 09:57:11 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:41.262 09:57:11 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:41.262 09:57:11 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:41.262 09:57:11 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:41.262 09:57:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:41.262 09:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.262 09:57:11 -- spdk/autotest.sh@70 -- # create_test_list 00:04:41.262 09:57:11 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:41.262 09:57:11 -- common/autotest_common.sh@10 -- # set +x 00:04:41.262 09:57:11 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:41.262 09:57:11 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:41.262 09:57:11 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:41.262 09:57:11 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:41.262 09:57:11 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:41.262 09:57:11 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:41.262 09:57:11 -- common/autotest_common.sh@1440 -- # uname 00:04:41.262 09:57:11 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:41.262 09:57:11 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:41.262 09:57:11 -- common/autotest_common.sh@1460 -- # uname 00:04:41.262 09:57:11 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:41.262 09:57:11 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:41.262 09:57:11 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:41.262 09:57:11 -- spdk/autotest.sh@83 -- # hash lcov 00:04:41.262 09:57:11 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:41.262 09:57:11 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:41.262 --rc lcov_branch_coverage=1 00:04:41.262 --rc lcov_function_coverage=1 00:04:41.262 --rc genhtml_branch_coverage=1 00:04:41.262 --rc genhtml_function_coverage=1 00:04:41.262 --rc genhtml_legend=1 00:04:41.262 --rc geninfo_all_blocks=1 00:04:41.262 ' 00:04:41.262 09:57:11 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:41.262 --rc lcov_branch_coverage=1 00:04:41.262 --rc lcov_function_coverage=1 00:04:41.262 --rc genhtml_branch_coverage=1 00:04:41.262 --rc genhtml_function_coverage=1 00:04:41.262 --rc genhtml_legend=1 00:04:41.262 --rc geninfo_all_blocks=1 00:04:41.262 ' 00:04:41.262 09:57:11 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:41.262 --rc lcov_branch_coverage=1 00:04:41.262 --rc lcov_function_coverage=1 00:04:41.262 --rc genhtml_branch_coverage=1 00:04:41.262 --rc genhtml_function_coverage=1 00:04:41.262 --rc genhtml_legend=1 00:04:41.262 --rc geninfo_all_blocks=1 00:04:41.262 --no-external' 00:04:41.263 09:57:11 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:41.263 --rc lcov_branch_coverage=1 00:04:41.263 --rc lcov_function_coverage=1 00:04:41.263 --rc genhtml_branch_coverage=1 00:04:41.263 --rc genhtml_function_coverage=1 00:04:41.263 --rc 
genhtml_legend=1 00:04:41.263 --rc geninfo_all_blocks=1 00:04:41.263 --no-external' 00:04:41.263 09:57:11 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:41.263 lcov: LCOV version 1.14 00:04:41.263 09:57:11 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:49.384 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:49.384 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:49.384 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:49.384 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:04:49.384 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:49.384 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:07.490 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:07.490 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:07.490
[geninfo prints the same warning pair ("<header>.gcno:no functions found" / "GCOV did not produce any data for <header>.gcno") for every remaining header under test/cpp_headers, blob_bdev through tree; the individual entries are elided] 00:05:07.491
/home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:07.491 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:07.491 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:08.863 09:57:39 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:08.863 09:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:08.863 09:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:08.863 09:57:39 -- spdk/autotest.sh@102 -- # rm -f 00:05:08.863 09:57:39 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:09.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:09.430 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:09.430 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:09.430 09:57:39 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:09.430 09:57:39 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:09.430 09:57:39 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:09.430 09:57:39 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:09.430 09:57:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:09.430 09:57:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:09.430 09:57:39 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:09.430 09:57:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1657 -- # 
for nvme in /sys/block/nvme* 00:05:09.430 09:57:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:09.430 09:57:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:09.430 09:57:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:09.430 09:57:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:09.430 09:57:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:09.430 09:57:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:09.430 09:57:39 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:09.430 09:57:39 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:09.430 09:57:39 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:09.430 09:57:39 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:09.430 09:57:39 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:09.430 09:57:39 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:09.430 09:57:39 -- spdk/autotest.sh@121 -- # grep -v p 00:05:09.430 09:57:39 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:09.430 09:57:39 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:09.430 09:57:39 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:09.430 09:57:39 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:09.430 09:57:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:09.430 No valid GPT data, bailing 00:05:09.430 09:57:39 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.688 09:57:39 -- scripts/common.sh@393 -- # pt= 00:05:09.688 09:57:39 -- scripts/common.sh@394 -- # return 1 00:05:09.688 09:57:39 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:09.688 1+0 records in 00:05:09.688 1+0 records out 00:05:09.688 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00414088 s, 253 MB/s 00:05:09.688 09:57:39 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:09.688 09:57:39 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:09.688 09:57:39 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:09.688 09:57:39 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:09.688 09:57:39 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:09.688 No valid GPT data, bailing 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # pt= 00:05:09.689 09:57:40 -- scripts/common.sh@394 -- # return 1 00:05:09.689 09:57:40 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:09.689 1+0 records in 00:05:09.689 1+0 records out 00:05:09.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00312163 s, 336 MB/s 00:05:09.689 09:57:40 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:09.689 09:57:40 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:09.689 09:57:40 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 
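Each namespace that fails the GPT check in this loop is scrubbed at its head, which is why every dd in the trace reports exactly one 1 MiB record copied. A condensed sketch of the same loop, using plain blkid where the trace calls block_in_use and spdk-gpt.py:

  for dev in $(ls /dev/nvme*n* | grep -v p || true); do
    # blkid emits a PTTYPE value (e.g. "gpt") only when a partition table exists.
    if pt=$(blkid -s PTTYPE -o value "$dev") && [[ -n $pt ]]; then
      echo "skipping $dev: $pt partition table present" >&2
      continue
    fi
    # Zero the first MiB so stale GPT/filesystem signatures cannot leak into later tests.
    dd if=/dev/zero of="$dev" bs=1M count=1
  done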
00:05:09.689 09:57:40 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:09.689 09:57:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:09.689 No valid GPT data, bailing 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # pt= 00:05:09.689 09:57:40 -- scripts/common.sh@394 -- # return 1 00:05:09.689 09:57:40 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:09.689 1+0 records in 00:05:09.689 1+0 records out 00:05:09.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0044527 s, 235 MB/s 00:05:09.689 09:57:40 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:09.689 09:57:40 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:09.689 09:57:40 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:09.689 09:57:40 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:09.689 09:57:40 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:09.689 No valid GPT data, bailing 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:09.689 09:57:40 -- scripts/common.sh@393 -- # pt= 00:05:09.689 09:57:40 -- scripts/common.sh@394 -- # return 1 00:05:09.689 09:57:40 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:09.689 1+0 records in 00:05:09.689 1+0 records out 00:05:09.689 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00448727 s, 234 MB/s 00:05:09.689 09:57:40 -- spdk/autotest.sh@129 -- # sync 00:05:09.947 09:57:40 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:09.947 09:57:40 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:09.947 09:57:40 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:11.845 09:57:42 -- spdk/autotest.sh@135 -- # uname -s 00:05:11.845 09:57:42 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:11.845 09:57:42 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:11.845 09:57:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.845 09:57:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.845 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.845 ************************************ 00:05:11.845 START TEST setup.sh 00:05:11.845 ************************************ 00:05:11.845 09:57:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:11.845 * Looking for test storage... 00:05:11.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:11.845 09:57:42 -- setup/test-setup.sh@10 -- # uname -s 00:05:11.845 09:57:42 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:11.845 09:57:42 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:11.845 09:57:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:11.845 09:57:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:11.845 09:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:11.845 ************************************ 00:05:11.845 START TEST acl 00:05:11.845 ************************************ 00:05:11.845 09:57:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:11.845 * Looking for test storage... 
00:05:11.845 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:11.845 09:57:42 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:11.845 09:57:42 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:11.845 09:57:42 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:11.845 09:57:42 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:11.845 09:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.845 09:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:11.845 09:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:11.845 09:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.845 09:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:11.845 09:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:11.845 09:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.845 09:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:11.845 09:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:11.845 09:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.845 09:57:42 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:11.845 09:57:42 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:11.845 09:57:42 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.845 09:57:42 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.845 09:57:42 -- setup/acl.sh@12 -- # devs=() 00:05:11.845 09:57:42 -- setup/acl.sh@12 -- # declare -a devs 00:05:11.845 09:57:42 -- setup/acl.sh@13 -- # drivers=() 00:05:11.845 09:57:42 -- setup/acl.sh@13 -- # declare -A drivers 00:05:11.845 09:57:42 -- setup/acl.sh@51 -- # setup reset 00:05:11.845 09:57:42 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:11.845 09:57:42 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:12.779 09:57:43 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:12.779 09:57:43 -- setup/acl.sh@16 -- # local dev driver 00:05:12.779 09:57:43 -- setup/acl.sh@15 -- # setup output status 00:05:12.779 09:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.779 09:57:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:12.779 09:57:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:12.779 Hugepages 00:05:12.779 node hugesize free / total 00:05:12.779 09:57:43 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:12.779 09:57:43 -- setup/acl.sh@19 -- # continue 00:05:12.779 09:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:12.779 00:05:12.779 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.779 09:57:43 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:12.779 09:57:43 -- setup/acl.sh@19 -- # continue 00:05:12.779 09:57:43 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:12.779 09:57:43 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:12.779 09:57:43 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:12.779 09:57:43 -- setup/acl.sh@20 -- # continue 00:05:12.779 09:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:13.037 09:57:43 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:13.037 09:57:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:13.037 09:57:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:13.037 09:57:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:13.037 09:57:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:13.037 09:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:13.037 09:57:43 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:13.037 09:57:43 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:13.037 09:57:43 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:13.037 09:57:43 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:13.037 09:57:43 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:13.037 09:57:43 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:13.037 09:57:43 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:13.037 09:57:43 -- setup/acl.sh@54 -- # run_test denied denied 00:05:13.037 09:57:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.037 09:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.037 09:57:43 -- common/autotest_common.sh@10 -- # set +x 00:05:13.037 ************************************ 00:05:13.037 START TEST denied 00:05:13.037 ************************************ 00:05:13.037 09:57:43 -- common/autotest_common.sh@1104 -- # denied 00:05:13.037 09:57:43 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:13.037 09:57:43 -- setup/acl.sh@38 -- # setup output config 00:05:13.037 09:57:43 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:13.037 09:57:43 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:13.037 09:57:43 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:13.971 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:13.971 09:57:44 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:13.971 09:57:44 -- setup/acl.sh@28 -- # local dev driver 00:05:13.971 09:57:44 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:13.971 09:57:44 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:13.971 09:57:44 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:13.971 09:57:44 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:13.971 09:57:44 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:13.971 09:57:44 -- setup/acl.sh@41 -- # setup reset 00:05:13.971 09:57:44 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.971 09:57:44 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.539 00:05:14.539 real 0m1.496s 00:05:14.539 user 0m0.589s 00:05:14.539 sys 0m0.813s 00:05:14.539 09:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:14.539 ************************************ 00:05:14.539 END TEST denied 00:05:14.539 09:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.539 ************************************ 00:05:14.539 09:57:44 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:14.539 09:57:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.539 09:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.539 
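The denied test above pins 0000:00:06.0 on its kernel driver via a block list; the allowed test that continues below rebinds only that controller via an allow list. The usage pattern being exercised, as read from this trace (paths relative to the spdk repo):

  PCI_BLOCKED=' 0000:00:06.0' scripts/setup.sh config   # 00:06.0 is skipped ("denied"), stays on nvme
  PCI_ALLOWED='0000:00:06.0' scripts/setup.sh config    # only 00:06.0 is rebound (nvme -> uio_pci_generic)
  scripts/setup.sh reset                                # hand devices back to their kernel drivers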
09:57:44 -- common/autotest_common.sh@10 -- # set +x 00:05:14.539 ************************************ 00:05:14.539 START TEST allowed 00:05:14.539 ************************************ 00:05:14.539 09:57:44 -- common/autotest_common.sh@1104 -- # allowed 00:05:14.539 09:57:44 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:14.539 09:57:44 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:14.539 09:57:44 -- setup/acl.sh@45 -- # setup output config 00:05:14.539 09:57:44 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.539 09:57:44 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.475 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:15.475 09:57:45 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:15.475 09:57:45 -- setup/acl.sh@28 -- # local dev driver 00:05:15.475 09:57:45 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:15.475 09:57:45 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:15.475 09:57:45 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:15.475 09:57:45 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:15.475 09:57:45 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:15.475 09:57:45 -- setup/acl.sh@48 -- # setup reset 00:05:15.475 09:57:45 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.475 09:57:45 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:16.041 ************************************ 00:05:16.041 END TEST allowed 00:05:16.041 ************************************ 00:05:16.041 00:05:16.041 real 0m1.476s 00:05:16.041 user 0m0.652s 00:05:16.041 sys 0m0.811s 00:05:16.041 09:57:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.041 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.041 ************************************ 00:05:16.041 END TEST acl 00:05:16.041 ************************************ 00:05:16.041 00:05:16.041 real 0m4.258s 00:05:16.041 user 0m1.828s 00:05:16.041 sys 0m2.345s 00:05:16.041 09:57:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.041 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.041 09:57:46 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:16.041 09:57:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:16.041 09:57:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:16.041 09:57:46 -- common/autotest_common.sh@10 -- # set +x 00:05:16.041 ************************************ 00:05:16.041 START TEST hugepages 00:05:16.041 ************************************ 00:05:16.041 09:57:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:16.300 * Looking for test storage... 
00:05:16.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:16.300 09:57:46 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:16.300 09:57:46 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:16.300 09:57:46 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:16.300 09:57:46 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:16.300 09:57:46 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:16.300 09:57:46 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:16.300 09:57:46 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:16.300 09:57:46 -- setup/common.sh@18 -- # local node= 00:05:16.300 09:57:46 -- setup/common.sh@19 -- # local var val 00:05:16.300 09:57:46 -- setup/common.sh@20 -- # local mem_f mem 00:05:16.300 09:57:46 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:16.300 09:57:46 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:16.300 09:57:46 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:16.300 09:57:46 -- setup/common.sh@28 -- # mapfile -t mem 00:05:16.300 09:57:46 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:16.300 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.300 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.301 09:57:46 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 4464336 kB' 'MemAvailable: 7366336 kB' 'Buffers: 2436 kB' 'Cached: 3103172 kB' 'SwapCached: 0 kB' 'Active: 473916 kB' 'Inactive: 2732892 kB' 'Active(anon): 111692 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732892 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 102972 kB' 'Mapped: 48632 kB' 'Shmem: 10492 kB' 'KReclaimable: 87648 kB' 'Slab: 167452 kB' 'SReclaimable: 87648 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6512 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 330080 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.301 09:57:46 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.301 09:57:46 -- 
setup/common.sh@32 -- # continue 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.301 09:57:46 -- setup/common.sh@31 -- # read -r var val _
[the identical "[[ $var == Hugepagesize ]] / continue" iteration repeats for each remaining /proc/meminfo key, Cached through ShmemHugePages; the per-key xtrace lines are elided] 00:05:16.302 09:57:46 --
setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # continue 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # IFS=': ' 00:05:16.302 09:57:46 -- setup/common.sh@31 -- # read -r var val _ 00:05:16.302 09:57:46 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:16.302 09:57:46 -- setup/common.sh@33 -- # echo 2048 00:05:16.302 09:57:46 -- setup/common.sh@33 -- # return 0 00:05:16.302 09:57:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:16.302 09:57:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:16.302 09:57:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:16.302 09:57:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:16.302 09:57:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:16.302 09:57:46 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
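The long run of [[ $var == Hugepagesize ]] / continue iterations collapsed above is a single get_meminfo pass over /proc/meminfo. Reduced to its essentials it behaves like this sketch (simplified; the real setup/common.sh helper also supports per-node meminfo via node=):

  get_meminfo() {
      local get=$1 var val _
      # Split each 'Key: value kB' line on ':' and spaces; stop at the wanted key.
      while IFS=': ' read -r var val _; do
          if [[ $var == "$get" ]]; then
              echo "$val"
              return 0
          fi
      done </proc/meminfo
      return 1
  }

  get_meminfo Hugepagesize   # prints 2048 on this VM, matching the 'echo 2048' in the trace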
00:05:16.302 09:57:46 -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:05:16.302 09:57:46 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:05:16.302 09:57:46 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:05:16.302 09:57:46 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:05:16.302 09:57:46 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:05:16.302 09:57:46 -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:05:16.302 09:57:46 -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:05:16.302 09:57:46 -- setup/hugepages.sh@207 -- # get_nodes
00:05:16.302 09:57:46 -- setup/hugepages.sh@27 -- # local node
00:05:16.302 09:57:46 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:16.302 09:57:46 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:05:16.302 09:57:46 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:16.302 09:57:46 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:16.302 09:57:46 -- setup/hugepages.sh@208 -- # clear_hp
00:05:16.302 09:57:46 -- setup/hugepages.sh@37 -- # local node hp
00:05:16.302 09:57:46 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:05:16.302 09:57:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:16.302 09:57:46 -- setup/hugepages.sh@41 -- # echo 0
00:05:16.302 09:57:46 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:05:16.302 09:57:46 -- setup/hugepages.sh@41 -- # echo 0
00:05:16.302 09:57:46 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:05:16.302 09:57:46 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:05:16.302 09:57:46 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:05:16.302 09:57:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:16.302 09:57:46 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:16.302 09:57:46 -- common/autotest_common.sh@10 -- # set +x
00:05:16.302 ************************************
00:05:16.302 START TEST default_setup
00:05:16.302 ************************************
00:05:16.302 09:57:46 -- common/autotest_common.sh@1104 -- # default_setup
00:05:16.302 09:57:46 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:05:16.302 09:57:46 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:16.302 09:57:46 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:16.302 09:57:46 -- setup/hugepages.sh@51 -- # shift
00:05:16.302 09:57:46 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:16.302 09:57:46 -- setup/hugepages.sh@52 -- # local node_ids
00:05:16.302 09:57:46 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:16.302 09:57:46 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:16.302 09:57:46 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:16.302 09:57:46 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:16.302 09:57:46 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:16.302 09:57:46 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:16.302 09:57:46 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:16.302 09:57:46 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:16.302 09:57:46 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:16.302 09:57:46 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:16.302 09:57:46 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:16.302 09:57:46 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:05:16.302 09:57:46 -- setup/hugepages.sh@73 -- # return 0
00:05:16.302 09:57:46 -- setup/hugepages.sh@137 -- # setup output
00:05:16.302 09:57:46 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:16.302 09:57:46 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:16.869 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:16.869 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic
00:05:17.131 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic
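A few entries up, get_test_nr_hugepages 2097152 0 converts a memory budget into a page count; the arithmetic only yields nr_hugepages=1024 if size is in kB (2097152 kB = 2 GiB at 2048 kB per page). A hedged sketch of that conversion plus the reset clear_hp's @40/@41 loop suggests; variable names mirror the trace, and the exact hugepages.sh logic may differ:

    size_kb=2097152        # requested budget, per get_test_nr_hugepages 2097152 0
    hugepagesize_kb=2048   # Hugepagesize from /proc/meminfo
    nr_hugepages=$((size_kb / hugepagesize_kb))
    echo "nr_hugepages=$nr_hugepages"   # prints nr_hugepages=1024, as in the trace

    # zero every per-node hugepage pool before the test (requires root;
    # this is the real sysfs layout the trace iterates over on node0)
    for hp in /sys/devices/system/node/node0/hugepages/hugepages-*; do
        echo 0 > "$hp/nr_hugepages"
    done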
00:05:17.131 09:57:47 -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:05:17.131 09:57:47 -- setup/hugepages.sh@89 -- # local node
00:05:17.131 09:57:47 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.131 09:57:47 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.131 09:57:47 -- setup/hugepages.sh@92 -- # local surp
00:05:17.131 09:57:47 -- setup/hugepages.sh@93 -- # local resv
00:05:17.131 09:57:47 -- setup/hugepages.sh@94 -- # local anon
00:05:17.131 09:57:47 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:17.131 09:57:47 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.131 09:57:47 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:17.131 09:57:47 -- setup/common.sh@18 -- # local node=
00:05:17.131 09:57:47 -- setup/common.sh@19 -- # local var val
00:05:17.131 09:57:47 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.131 09:57:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.131 09:57:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.131 09:57:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.131 09:57:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.131 09:57:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.131 09:57:47 -- setup/common.sh@31 -- # IFS=': '
00:05:17.131 09:57:47 -- setup/common.sh@31 -- # read -r var val _
00:05:17.131 09:57:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6566360 kB' 'MemAvailable: 9468184 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490236 kB' 'Inactive: 2732900 kB' 'Active(anon): 128012 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119176 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167020 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79744 kB' 'KernelStack: 6464 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
[trace collapsed: the @31-32 loop walks the snapshot above field by field, hitting "continue" on everything from MemTotal onward until AnonHugePages matches]
00:05:17.132 09:57:47 -- setup/common.sh@33 -- # echo 0
00:05:17.132 09:57:47 -- setup/common.sh@33 -- # return 0
00:05:17.132 09:57:47 -- setup/hugepages.sh@97 -- # anon=0
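Before trusting the counters, verify_nr_hugepages rules out transparent-hugepage noise: the @96 test passes because /sys/kernel/mm/transparent_hugepage/enabled reads "always [madvise] never" (the bracketed word is the active mode, and it is not never), so get_meminfo AnonHugePages is consulted and comes back 0. A sketch of the same pair of checks:

    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is active in some mode, so anonymous huge pages could exist;
        # check how many are actually mapped right now
        anon=$(awk '/^AnonHugePages:/ {print $2}' /proc/meminfo)
        echo "AnonHugePages: ${anon} kB"   # 0 kB in this run
    fi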
00:05:17.132 09:57:47 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:17.132 09:57:47 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.132 09:57:47 -- setup/common.sh@18 -- # local node=
00:05:17.132 09:57:47 -- setup/common.sh@19 -- # local var val
00:05:17.132 09:57:47 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.132 09:57:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.132 09:57:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.132 09:57:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.132 09:57:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.132 09:57:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.132 09:57:47 -- setup/common.sh@31 -- # IFS=': '
00:05:17.132 09:57:47 -- setup/common.sh@31 -- # read -r var val _
00:05:17.132 09:57:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6565860 kB' 'MemAvailable: 9467684 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 489896 kB' 'Inactive: 2732900 kB' 'Active(anon): 127672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118808 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167020 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79744 kB' 'KernelStack: 6464 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
[trace collapsed: the loop walks the snapshot again, field by field, until HugePages_Surp matches]
00:05:17.134 09:57:47 -- setup/common.sh@33 -- # echo 0
00:05:17.134 09:57:47 -- setup/common.sh@33 -- # return 0
00:05:17.134 09:57:47 -- setup/hugepages.sh@99 -- # surp=0
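get_meminfo can also scope to a NUMA node: the @23 test probes /sys/devices/system/node/node$node/meminfo ($node is empty in this run, so it falls back to /proc/meminfo), and the @29 expansion strips the "Node N " prefix that per-node meminfo lines carry. A sketch of that fallback for a single-node machine like this runner:

    node=0
    mem_f=/proc/meminfo
    # per-node lines look like "Node 0 HugePages_Total:  1024"
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # drop the "Node N " prefix so both sources parse the same way
    sed -E 's/^Node [0-9]+ //' "$mem_f" | grep HugePages_Total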
00:05:17.134 09:57:47 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.134 09:57:47 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:17.134 09:57:47 -- setup/common.sh@18 -- # local node=
00:05:17.134 09:57:47 -- setup/common.sh@19 -- # local var val
00:05:17.134 09:57:47 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.134 09:57:47 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.134 09:57:47 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.134 09:57:47 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.134 09:57:47 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.134 09:57:47 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.134 09:57:47 -- setup/common.sh@31 -- # IFS=': '
00:05:17.134 09:57:47 -- setup/common.sh@31 -- # read -r var val _
00:05:17.134 09:57:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6565104 kB' 'MemAvailable: 9466928 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490156 kB' 'Inactive: 2732900 kB' 'Active(anon): 127932 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119068 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167020 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79744 kB' 'KernelStack: 6464 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
[trace collapsed: the loop walks the snapshot again, field by field, until HugePages_Rsvd matches]
00:05:17.135 09:57:47 -- setup/common.sh@33 -- # echo 0
00:05:17.135 09:57:47 -- setup/common.sh@33 -- # return 0
00:05:17.135 09:57:47 -- setup/hugepages.sh@100 -- # resv=0
00:05:17.135 nr_hugepages=1024
00:05:17.135 09:57:47 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:17.135 resv_hugepages=0
00:05:17.135 09:57:47 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:17.135 09:57:47 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:17.135 surplus_hugepages=0
00:05:17.135 anon_hugepages=0
00:05:17.135 09:57:47 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.135 09:57:47 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.135 09:57:47 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
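The two arithmetic checks above are the heart of default_setup: the kernel's hugepage pool must equal the requested 1024 pages, with surplus and reserved both zero. A hedged sketch that replays the assertion against a live /proc/meminfo (which counter supplies the left-hand 1024 is not visible in the trace; both HugePages_Total and HugePages_Free read 1024 in the snapshots):

    nr_hugepages=1024   # requested pool, from get_test_nr_hugepages
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)

    # mirror of "(( 1024 == nr_hugepages + surp + resv ))" in the trace
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage pool matches the request"
    fi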
'VmallocUsed: 54804 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:17.135 09:57:47 -- setup/common.sh@31 -- # [xtrace condensed: with IFS=': ', read -r var val _ walks every field above; MemTotal through Unaccepted each fail [[ $var == HugePages_Total ]] and hit continue]
00:05:17.137 09:57:47 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.137 09:57:47 -- setup/common.sh@33 -- # echo 1024
00:05:17.137 09:57:47 -- setup/common.sh@33 -- # return 0
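The echoed 1024 above is the end of one get_meminfo() lookup. Pieced together from the traced commands (a reconstruction of the loop's shape, not the verbatim setup/common.sh source), the function amounts to:

    shopt -s extglob   # needed for the +([0-9]) pattern seen in the trace

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem
        # With a node argument the per-node file is used and its "Node N " prefix stripped.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # Each non-matching field is one "[[ var == ... ]]" / "continue" pair in the log.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val" && return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

So get_meminfo HugePages_Total prints 1024 here, while the get_meminfo HugePages_Surp 0 call traced just below reads node0's own meminfo file instead of the global one.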
00:05:17.137 09:57:47 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:17.137 09:57:47 -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.137 09:57:47 -- setup/hugepages.sh@27 -- # local node
00:05:17.137 09:57:47 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.137 09:57:47 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:17.137 09:57:47 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:17.137 09:57:47 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:17.137 09:57:47 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:17.137 09:57:47 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:17.137 09:57:47 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:17.137 09:57:47 -- setup/common.sh@17-29 -- # [xtrace condensed: get=HugePages_Surp, node=0; /sys/devices/system/node/node0/meminfo exists, so mem_f switches to it, mapfile -t mem reads it, and the "Node 0 " prefix is stripped from every line]
00:05:17.137 09:57:47 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6565104 kB' 'MemUsed: 5676872 kB' 'SwapCached: 0 kB' 'Active: 489844 kB' 'Inactive: 2732900 kB' 'Active(anon): 127620 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732900 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105596 kB' 'Mapped: 48752 kB' 'AnonPages: 119036 kB' 'Shmem: 10468 kB' 'KernelStack: 6500 kB' 'PageTables: 4280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167020 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:05:17.137 09:57:47 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through HugePages_Free each fail [[ $var == HugePages_Surp ]] and hit continue]
00:05:17.138 09:57:47 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.138 09:57:47 -- setup/common.sh@33 -- # echo 0
00:05:17.138 09:57:47 -- setup/common.sh@33 -- # return 0
00:05:17.138 09:57:47 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:17.138 09:57:47 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:17.138 09:57:47 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:17.138 09:57:47 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:17.138 node0=1024 expecting 1024
00:05:17.138 09:57:47 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:17.138 09:57:47 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:17.138 real 0m1.001s
00:05:17.138 user 0m0.481s
00:05:17.138 sys 0m0.465s
00:05:17.138 09:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:17.138 09:57:47 -- common/autotest_common.sh@10 -- # set +x
00:05:17.138 ************************************
00:05:17.138 END TEST default_setup
00:05:17.138 ************************************
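The default_setup test just passed: node0's actual count (1024) matched the expectation built from nr_hugepages plus surplus and reserved pages. Reassembled from the hugepages.sh trace lines above (nodes_test and nodes_sys are assumed to have been populated by get_test_nr_hugepages and get_nodes; the final [[ ]] compare is inferred from the [[ 1024 == 1024 ]] step, not quoted from source):

    verify_nodes() {
        local node surp resv=0
        local sorted_t=() sorted_s=()
        for node in "${!nodes_test[@]}"; do
            (( nodes_test[node] += resv ))               # reserved pages raise the expectation
            surp=$(get_meminfo HugePages_Surp "$node")   # 0 in the trace above
            (( nodes_test[node] += surp ))               # surplus pages do too
        done
        for node in "${!nodes_test[@]}"; do
            # Using the counts as array indices: bash lists indices in ascending
            # order, so the key lists below come out sorted for a stable compare.
            sorted_t[nodes_test[node]]=1
            sorted_s[nodes_sys[node]]=1
            echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
        done
        [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]       # here: 1024 == 1024
    }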
00:05:17.397 09:57:47 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:05:17.397 09:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:17.397 09:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:17.397 09:57:47 -- common/autotest_common.sh@10 -- # set +x
00:05:17.397 ************************************
00:05:17.397 START TEST per_node_1G_alloc
00:05:17.397 ************************************
00:05:17.397 09:57:47 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc
00:05:17.397 09:57:47 -- setup/hugepages.sh@143 -- # local IFS=,
00:05:17.397 09:57:47 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0
00:05:17.397 09:57:47 -- setup/hugepages.sh@49 -- # local size=1048576
00:05:17.397 09:57:47 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:05:17.397 09:57:47 -- setup/hugepages.sh@51 -- # shift
00:05:17.397 09:57:47 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:05:17.397 09:57:47 -- setup/hugepages.sh@52 -- # local node_ids
00:05:17.397 09:57:47 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.397 09:57:47 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:05:17.397 09:57:47 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:05:17.397 09:57:47 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:05:17.397 09:57:47 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.397 09:57:47 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:05:17.397 09:57:47 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:17.397 09:57:47 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.397 09:57:47 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.397 09:57:47 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:05:17.397 09:57:47 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:05:17.397 09:57:47 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:05:17.397 09:57:47 -- setup/hugepages.sh@73 -- # return 0
00:05:17.397 09:57:47 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:05:17.397 09:57:47 -- setup/hugepages.sh@146 -- # HUGENODE=0
00:05:17.397 09:57:47 -- setup/hugepages.sh@146 -- # setup output
00:05:17.397 09:57:47 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.397 09:57:47 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:17.661 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:17.661 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.661 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:17.661 09:57:48 -- setup/hugepages.sh@147 -- # nr_hugepages=512
00:05:17.661 09:57:48 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:05:17.661 09:57:48 -- setup/hugepages.sh@89 -- # local node
00:05:17.661 09:57:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:17.661 09:57:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:17.661 09:57:48 -- setup/hugepages.sh@92 -- # local surp
00:05:17.661 09:57:48 -- setup/hugepages.sh@93 -- # local resv
00:05:17.661 09:57:48 -- setup/hugepages.sh@94 -- # local anon
00:05:17.661 09:57:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
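Two details in the setup above deserve a gloss: get_test_nr_hugepages turned the 1048576 kB request into 512 default-sized pages pinned to node 0, and the [[ ... != *[never]* ]] line is a transparent-hugepage gate. A back-of-envelope sketch (the exact division is inferred from the traced values, and the sysfs read shows the standard THP location rather than a quote from the suite):

    # 1 GiB requested, 2 MiB default huge pages, one target node ('0')
    size_kb=1048576
    hugepagesize_kb=2048        # Hugepagesize from the meminfo dumps
    no_nodes=1
    nr_hugepages=$(( size_kb / hugepagesize_kb / no_nodes ))   # = 512
    NRHUGE=$nr_hugepages HUGENODE=0 scripts/setup.sh           # provision 512 pages on node 0
    # THP state; anything but "[never]" means AnonHugePages gets checked next
    cat /sys/kernel/mm/transparent_hugepage/enabled            # -> always [madvise] never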
00:05:17.661 09:57:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:17.661 09:57:48 -- setup/common.sh@17-31 -- # [xtrace condensed: get=AnonHugePages, node is empty, /sys/devices/system/node/node/meminfo does not exist, so mem_f stays /proc/meminfo; mapfile -t mem reads it]
00:05:17.661 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615540 kB' 'MemAvailable: 10517368 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490460 kB' 'Inactive: 2732904 kB' 'Active(anon): 128236 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119352 kB' 'Mapped: 48772 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167024 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6544 kB' 'PageTables: 4344 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:17.661 09:57:48 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through HardwareCorrupted each fail [[ $var == AnonHugePages ]] and hit continue]
00:05:17.663 09:57:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:17.663 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:17.663 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:17.663 09:57:48 -- setup/hugepages.sh@97 -- # anon=0
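The anon check resolves to 0, matching the AnonHugePages: 0 kB field in the dump above. Outside the harness, the lookup the trace just spent dozens of [[ ... ]]/continue steps on is a one-liner (an equivalent command for spot-checking, not one the suite itself runs):

    awk '$1 == "AnonHugePages:" { print $2 }' /proc/meminfo    # -> 0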
00:05:17.663 09:57:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:17.663 09:57:48 -- setup/common.sh@17-31 -- # [xtrace condensed: get=HugePages_Surp, node is empty again, so mem_f stays /proc/meminfo; mapfile -t mem reads it]
00:05:17.663 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615540 kB' 'MemAvailable: 10517368 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490076 kB' 'Inactive: 2732904 kB' 'Active(anon): 127852 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118960 kB' 'Mapped: 48652 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167024 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6480 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:17.663 09:57:48 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through HugePages_Rsvd each fail [[ $var == HugePages_Surp ]] and hit continue]
00:05:17.665 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.665 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:17.665 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:17.665 09:57:48 -- setup/hugepages.sh@99 -- # surp=0
00:05:17.665 09:57:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:17.665 09:57:48 -- setup/common.sh@17-31 -- # [xtrace condensed: get=HugePages_Rsvd, node is empty, mem_f stays /proc/meminfo; mapfile -t mem reads it]
00:05:17.665 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615540 kB' 'MemAvailable: 10517368 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 489856 kB' 'Inactive: 2732904 kB' 'Active(anon): 127632 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118736 kB' 'Mapped: 48652 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167024 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6480 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
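This is the third meminfo snapshot in a row for the same verification pass: verify_nr_hugepages queries AnonHugePages, HugePages_Surp, and HugePages_Rsvd in turn and folds them into one expectation. A sketch of how the three values combine (the arithmetic mirrors the (( 1024 == nr_hugepages + surp + resv )) check traced at the end of default_setup; treat the wiring as reconstructed, not quoted):

    verify_nr_hugepages() {
        # nr_hugepages (512 here) is a global set by get_test_nr_hugepages.
        local anon surp resv
        # The THP gate traced earlier: unless THP is pinned to [never],
        # anonymous huge pages must stay at zero during the test.
        if [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
            anon=$(get_meminfo AnonHugePages)      # 0 kB in the dumps above
            (( anon == 0 )) || return 1
        fi
        surp=$(get_meminfo HugePages_Surp)         # surplus pages beyond the pool
        resv=$(get_meminfo HugePages_Rsvd)         # reserved-but-unfaulted pages
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }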
00:05:17.665 09:57:48 -- setup/common.sh@31-32 -- # [xtrace condensed: MemTotal through Percpu each fail [[ $var == HugePages_Rsvd ]] and hit continue]
00:05:17.667 09:57:48 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:17.667 09:57:48 -- setup/common.sh@33 -- # echo 0 00:05:17.667 09:57:48 -- setup/common.sh@33 -- # return 0 00:05:17.667 09:57:48 -- setup/hugepages.sh@100 -- # resv=0 00:05:17.667 nr_hugepages=512 00:05:17.667 09:57:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:17.667 
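What this wall of xtrace is exercising is a single helper: get_meminfo reads /proc/meminfo (or a node-local copy) into an array and scans it line by line until the requested key matches, printing the value. A minimal sketch reconstructed from the traced statements; the exact body of setup/common.sh is assumed, not quoted:

  shopt -s extglob                       # needed for the +([0-9]) pattern below
  get_meminfo() {                        # usage: get_meminfo KEY [NODE]
      local get=$1 node=$2
      local var val _
      local mem_f=/proc/meminfo mem
      # With a node argument, read the node-local copy instead (the trace's
      # "[[ -e /sys/devices/system/node/node0/meminfo ]]" branch).
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # Node files prefix each line with "Node N "; strip that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      # The printf '%s\n' snapshots in the log are this loop's input; every
      # non-matching key shows up above as a compare followed by "continue".
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

Called as get_meminfo HugePages_Rsvd it prints 0 here; called as get_meminfo HugePages_Surp 0 further down, it reads node0's meminfo instead.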
resv_hugepages=0
00:05:17.667 09:57:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
surplus_hugepages=0
00:05:17.667 09:57:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
anon_hugepages=0
00:05:17.667 09:57:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:17.667 09:57:48 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:17.667 09:57:48 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages ))
00:05:17.667 09:57:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:17.667 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:17.667 09:57:48 -- setup/common.sh@18 -- # local node=
00:05:17.667 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:17.667 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.667 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.667 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:17.667 09:57:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:17.667 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.667 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.667 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:17.667 09:57:48 -- setup/common.sh@31 -- # read -r var val _
00:05:17.667 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615540 kB' 'MemAvailable: 10517368 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490116 kB' 'Inactive: 2732904 kB' 'Active(anon): 127892 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118996 kB' 'Mapped: 48652 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167024 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79748 kB' 'KernelStack: 6480 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:17.667 09:57:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.667 09:57:48 -- setup/common.sh@32 -- # continue
00:05:17.668 [... the same @32 compare/continue pair repeats for every remaining key, MemFree through Unaccepted ...]
00:05:17.962 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:17.962 09:57:48 -- setup/common.sh@33 -- # echo 512
00:05:17.962 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:17.962 09:57:48 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv ))
00:05:17.962 09:57:48 -- setup/hugepages.sh@112 -- # get_nodes
00:05:17.962 09:57:48 -- setup/hugepages.sh@27 -- # local node
00:05:17.962 09:57:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:17.962 09:57:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:05:17.962 09:57:48 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:17.962 09:57:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:17.962 09:57:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:17.962 09:57:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:17.962 09:57:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:17.962 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:17.962 09:57:48 -- setup/common.sh@18 -- # local node=0
00:05:17.962 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:17.962 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:17.962 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:17.962 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:17.962 09:57:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
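At this point the helper has switched to the node-local meminfo file, and the surrounding get_nodes step has just enumerated the NUMA node directories and recorded each node's current hugepage count (nodes_sys[0]=512 above). A hedged sketch of that enumeration, reusing the get_meminfo sketch earlier; the trace only shows the already-expanded value 512, so the sysfs nr_hugepages path used to obtain it is an assumption:

  shopt -s extglob
  declare -A nodes_sys
  get_nodes() {
      local node
      for node in /sys/devices/system/node/node+([0-9]); do
          # "${node##*node}" keeps only the numeric node id, e.g. node0 -> 0.
          # Reading the per-node 2 MiB pool size from sysfs is an assumed way
          # to produce the 512 seen in the trace.
          nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
      done
      no_nodes=${#nodes_sys[@]}          # 1 on this single-node VM
      (( no_nodes > 0 ))                 # the "(( no_nodes > 0 ))" guard above
  }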
00:05:17.962 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:17.962 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:17.962 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:17.962 09:57:48 -- setup/common.sh@31 -- # read -r var val _
00:05:17.963 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615540 kB' 'MemUsed: 4626436 kB' 'SwapCached: 0 kB' 'Active: 489736 kB' 'Inactive: 2732904 kB' 'Active(anon): 127512 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105596 kB' 'Mapped: 48652 kB' 'AnonPages: 118664 kB' 'Shmem: 10468 kB' 'KernelStack: 6464 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167020 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:05:17.963 09:57:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.963 09:57:48 -- setup/common.sh@32 -- # continue
00:05:17.963 [... the same @32 compare/continue pair repeats for every remaining node0 key, MemFree through HugePages_Free ...]
00:05:17.963 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:17.963 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:17.963 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:17.963 09:57:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:17.963 09:57:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:17.963 09:57:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:17.963 09:57:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:17.963 node0=512 expecting 512
00:05:17.963 09:57:48 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:05:17.963 09:57:48 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:05:17.963
00:05:17.963 real 0m0.515s
00:05:17.963 user 0m0.252s
00:05:17.963 sys 0m0.300s
00:05:17.963 09:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:17.963 09:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:17.963 ************************************
00:05:17.963 END TEST per_node_1G_alloc
00:05:17.963 ************************************
00:05:17.963 09:57:48 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
00:05:17.963 09:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:17.963 09:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:17.963 09:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:17.963 ************************************
00:05:17.963 START TEST even_2G_alloc
00:05:17.963 ************************************
00:05:17.963 09:57:48 -- common/autotest_common.sh@1104 -- # even_2G_alloc
00:05:17.963 09:57:48 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:05:17.963 09:57:48 -- setup/hugepages.sh@49 -- # local size=2097152
00:05:17.963 09:57:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:17.963 09:57:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:17.963 09:57:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:05:17.963 09:57:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:17.963 09:57:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:17.963 09:57:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:17.963 09:57:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:05:17.963 09:57:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:17.963 09:57:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:17.963 09:57:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:17.963 09:57:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:17.963 09:57:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:17.964 09:57:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.964 09:57:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024
00:05:17.964 09:57:48 -- setup/hugepages.sh@83 -- # : 0
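The get_test_nr_hugepages walk just traced is plain arithmetic: the requested size in kB divided by the default hugepage size gives the page count, and with a single node the whole pool lands on node 0. A sketch with the values from the trace (the function body itself is assumed):

  default_hugepages=2048                 # kB; "Hugepagesize: 2048 kB" in the snapshots above
  nodes_test=()
  get_test_nr_hugepages() {
      local size=$1                      # requested pool in kB: 2097152 kB == 2 GiB
      (( size >= default_hugepages )) || return 1
      # 2097152 / 2048 = 1024, matching "nr_hugepages=1024" in the trace.
      nr_hugepages=$(( size / default_hugepages ))
      # One node and no explicit per-node list, so everything goes to node 0
      # ("nodes_test[_no_nodes - 1]=1024" above).
      nodes_test[0]=$nr_hugepages
  }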
00:05:17.964 09:57:48 -- setup/hugepages.sh@84 -- # : 0
00:05:17.964 09:57:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:17.964 09:57:48 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:05:17.964 09:57:48 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:05:17.964 09:57:48 -- setup/hugepages.sh@153 -- # setup output
00:05:17.964 09:57:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:17.964 09:57:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.225 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.225 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.225 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.225 09:57:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:05:18.225 09:57:48 -- setup/hugepages.sh@89 -- # local node
00:05:18.225 09:57:48 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:18.225 09:57:48 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:18.225 09:57:48 -- setup/hugepages.sh@92 -- # local surp
00:05:18.225 09:57:48 -- setup/hugepages.sh@93 -- # local resv
00:05:18.225 09:57:48 -- setup/hugepages.sh@94 -- # local anon
00:05:18.225 09:57:48 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:18.225 09:57:48 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:18.225 09:57:48 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:18.225 09:57:48 -- setup/common.sh@18 -- # local node=
00:05:18.225 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:18.225 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.225 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.225 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.225 09:57:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.225 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.225 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _
00:05:18.225 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6569736 kB' 'MemAvailable: 9471564 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 490300 kB' 'Inactive: 2732904 kB' 'Active(anon): 128076 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119192 kB' 'Mapped: 48780 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167128 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79852 kB' 'KernelStack: 6488 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
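The "[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]" test above is verify_nr_hugepages checking the kernel's transparent-hugepage mode before sampling AnonHugePages; in that sysfs file the bracketed word is the active mode. A rough sketch of the check, reusing the get_meminfo sketch earlier (script internals assumed):

  # Reads like "always [madvise] never"; brackets mark the active mode.
  thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
  if [[ $thp != *"[never]"* ]]; then
      # THP not fully disabled, so record AnonHugePages to discount it later.
      anon=$(get_meminfo AnonHugePages)   # "echo 0" in this run: no THP in use
  else
      anon=0
  fi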
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.225 09:57:48 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.225 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 
09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # 
continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.226 09:57:48 -- setup/common.sh@33 -- # echo 0 00:05:18.226 09:57:48 -- setup/common.sh@33 -- # return 0 00:05:18.226 09:57:48 -- setup/hugepages.sh@97 -- # anon=0 00:05:18.226 09:57:48 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.226 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.226 09:57:48 -- setup/common.sh@18 -- # local node= 00:05:18.226 09:57:48 -- setup/common.sh@19 -- # local var val 00:05:18.226 09:57:48 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.226 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.226 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.226 09:57:48 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.226 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.226 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6569736 kB' 'MemAvailable: 9471564 kB' 'Buffers: 2436 kB' 'Cached: 3103160 kB' 'SwapCached: 0 kB' 'Active: 489808 kB' 'Inactive: 2732904 kB' 'Active(anon): 127584 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118992 kB' 'Mapped: 49040 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167128 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79852 kB' 'KernelStack: 6488 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 345736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # continue 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.226 09:57:48 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.226 09:57:48 -- setup/common.sh@32 -- # 
[... setup/common.sh@31-32 loop trace: the same read/continue scan walks every /proc/meminfo field until HugePages_Surp matches ...]
00:05:18.227 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.227 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:18.227 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:18.227 09:57:48 -- setup/hugepages.sh@99 -- # surp=0
00:05:18.227 09:57:48 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:18.228 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:18.228 09:57:48 -- setup/common.sh@18 -- # local node=
00:05:18.228 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:18.228 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.228 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.228 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.228 09:57:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.228 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.228 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.228 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:18.228 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6569484 kB' 'MemAvailable: 9471312 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 489896 kB' 'Inactive: 2732904 kB' 'Active(anon): 127672 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119032 kB' 'Mapped: 48652 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167120 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79844 kB' 'KernelStack: 6484 kB' 'PageTables: 4248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:18.228 09:57:48 -- setup/common.sh@31 -- # read -r var val _
[... setup/common.sh@31-32 loop trace: the scan skips every field until HugePages_Rsvd matches ...]
00:05:18.229 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:18.229 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:18.229 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:18.229 09:57:48 -- setup/hugepages.sh@100 -- # resv=0
00:05:18.229 nr_hugepages=1024
00:05:18.229 09:57:48 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:18.229 resv_hugepages=0
00:05:18.229 09:57:48 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:18.229 surplus_hugepages=0
00:05:18.229 09:57:48 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:18.229 anon_hugepages=0
00:05:18.229 09:57:48 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:18.229 09:57:48 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:18.229 09:57:48 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:05:18.229 09:57:48 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:18.229 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:18.229 09:57:48 -- setup/common.sh@18 -- # local node=
00:05:18.229 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:18.229 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.229 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.229 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.229 09:57:48 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.229 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.229 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.229 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:18.229 09:57:48 -- setup/common.sh@31 -- # read -r var val _
00:05:18.229 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6569484 kB' 'MemAvailable: 9471312 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490136 kB' 'Inactive: 2732904 kB' 'Active(anon): 127912 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119016 kB' 'Mapped: 48648 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167112 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79836 kB' 'KernelStack: 6480 kB' 'PageTables: 4164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
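The check at setup/hugepages.sh@107 above is what the three lookups feed into: the kernel's HugePages_Total has to equal the count the test configured plus the surplus and reserved pages just read back. The same identity with plain awk (variable names are mine; values as observed in this run):

  nr_hugepages=1024                                            # what the test configured
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)  # 1024 here
  surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)    # 0 here
  resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)    # 0 here
  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off' >&2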
[... setup/common.sh@31-32 loop trace: the scan skips every field until HugePages_Total matches ...]
00:05:18.490 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:18.490 09:57:48 -- setup/common.sh@33 -- # echo 1024
00:05:18.490 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:18.490 09:57:48 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:18.490 09:57:48 -- setup/hugepages.sh@112 -- # get_nodes
00:05:18.490 09:57:48 -- setup/hugepages.sh@27 -- # local node
00:05:18.490 09:57:48 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:18.490 09:57:48 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:18.490 09:57:48 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:18.490 09:57:48 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:18.490 09:57:48 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:18.490 09:57:48 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:18.490 09:57:48 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:18.490 09:57:48 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:18.490 09:57:48 -- setup/common.sh@18 -- # local node=0
00:05:18.490 09:57:48 -- setup/common.sh@19 -- # local var val
00:05:18.490 09:57:48 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.490 09:57:48 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.490 09:57:48 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:18.490 09:57:48 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:18.490 09:57:48 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.490 09:57:48 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.490 09:57:48 -- setup/common.sh@31 -- # IFS=': '
00:05:18.490 09:57:48 -- setup/common.sh@31 -- # read -r var val _
00:05:18.490 09:57:48 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6569484 kB' 'MemUsed: 5672492 kB' 'SwapCached: 0 kB' 'Active: 489788 kB' 'Inactive: 2732904 kB' 'Active(anon): 127564 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732904 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105600 kB' 'Mapped: 48648 kB' 'AnonPages: 118748 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167112 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79836 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
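get_nodes above walks /sys/devices/system/node/node+([0-9]) to build nodes_sys and finds a single node on this VM. A rough stand-alone equivalent (my names, not the harness's):

  declare -A nodes_sys
  for node_dir in /sys/devices/system/node/node[0-9]*; do
      [[ -d $node_dir ]] || continue   # glob did not match: no NUMA sysfs
      n=${node_dir##*node}
      # node-local meminfo lines read "Node 0 HugePages_Total:  1024"
      nodes_sys[$n]=$(awk '/HugePages_Total:/ {print $NF}' "$node_dir/meminfo")
  done
  echo "no_nodes=${#nodes_sys[@]}"     # no_nodes=1 on this single-node VM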
[... setup/common.sh@31-32 loop trace: the scan skips every node0 meminfo field until HugePages_Surp matches ...]
00:05:18.491 09:57:48 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.491 09:57:48 -- setup/common.sh@33 -- # echo 0
00:05:18.491 09:57:48 -- setup/common.sh@33 -- # return 0
00:05:18.491 09:57:48 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:18.491 09:57:48 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:18.491 09:57:48 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:18.491 node0=1024 expecting 1024
09:57:48 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:18.491 09:57:48 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:18.491
00:05:18.491 real 0m0.527s
00:05:18.491 user 0m0.268s
00:05:18.491 sys 0m0.291s
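The state even_2G_alloc just asserted ('node0=1024 expecting 1024') can be spot-checked by hand; both paths below are standard kernel interfaces, the second assuming the 2048 kB default page size seen in this run:

  grep -E 'HugePages_(Total|Free|Rsvd|Surp)' /proc/meminfo
  cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages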
00:05:18.491 09:57:48 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.491 09:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:18.491 ************************************
00:05:18.491 END TEST even_2G_alloc
00:05:18.491 ************************************
00:05:18.491 09:57:48 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:05:18.491 09:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:18.491 09:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:18.491 09:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:18.491 ************************************
00:05:18.491 START TEST odd_alloc
00:05:18.491 ************************************
00:05:18.491 09:57:48 -- common/autotest_common.sh@1104 -- # odd_alloc
00:05:18.491 09:57:48 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:05:18.491 09:57:48 -- setup/hugepages.sh@49 -- # local size=2098176
00:05:18.491 09:57:48 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:05:18.491 09:57:48 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:05:18.491 09:57:48 -- setup/hugepages.sh@62 -- # user_nodes=()
00:05:18.491 09:57:48 -- setup/hugepages.sh@62 -- # local user_nodes
00:05:18.491 09:57:48 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:05:18.491 09:57:48 -- setup/hugepages.sh@65 -- # local _no_nodes=1
00:05:18.491 09:57:48 -- setup/hugepages.sh@67 -- # nodes_test=()
00:05:18.491 09:57:48 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:05:18.491 09:57:48 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025
00:05:18.491 09:57:48 -- setup/hugepages.sh@83 -- # : 0
00:05:18.491 09:57:48 -- setup/hugepages.sh@84 -- # : 0
00:05:18.491 09:57:48 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:05:18.491 09:57:48 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:05:18.491 09:57:48 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:05:18.491 09:57:48 -- setup/hugepages.sh@160 -- # setup output
00:05:18.491 09:57:48 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:18.491 09:57:48 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:18.751 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:18.751 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:18.751 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
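For odd_alloc, get_test_nr_hugepages converts the requested size of 2098176 kB (HUGEMEM=2049 MiB) into the deliberately odd page count of 1025. With the 2048 kB default hugepage size, 2098176 / 2048 is 1024.5, so the figures in the trace are consistent with a round-up; a sketch of that conversion, assuming ceiling division:

# Sketch: requested size in kB -> hugepage count; ceiling division is an
# assumption that reproduces the 2098176 kB -> 1025 pages seen in the trace.
size_kb=2098176        # HUGEMEM=2049 MiB expressed in kB
hugepage_kb=2048       # Hugepagesize: 2048 kB, from /proc/meminfo
nr_hugepages=$(( (size_kb + hugepage_kb - 1) / hugepage_kb ))
echo "nr_hugepages=$nr_hugepages"   # -> nr_hugepages=1025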
00:05:18.751 09:57:49 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:05:18.751 09:57:49 -- setup/hugepages.sh@89 -- # local node
00:05:18.751 09:57:49 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:18.751 09:57:49 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:18.751 09:57:49 -- setup/hugepages.sh@92 -- # local surp
00:05:18.751 09:57:49 -- setup/hugepages.sh@93 -- # local resv
00:05:18.751 09:57:49 -- setup/hugepages.sh@94 -- # local anon
00:05:18.751 09:57:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:05:18.751 09:57:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:18.751 09:57:49 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:18.751 09:57:49 -- setup/common.sh@18 -- # local node=
00:05:18.751 09:57:49 -- setup/common.sh@19 -- # local var val
00:05:18.751 09:57:49 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.751 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.751 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.751 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.751 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.751 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.751 09:57:49 -- setup/common.sh@31 -- # IFS=': '
00:05:18.751 09:57:49 -- setup/common.sh@31 -- # read -r var val _
00:05:18.751 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570760 kB' 'MemAvailable: 9472592 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490304 kB' 'Inactive: 2732908 kB' 'Active(anon): 128080 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119224 kB' 'Mapped: 48812 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167112 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79836 kB' 'KernelStack: 6520 kB' 'PageTables: 4188 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:18.752 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:18.752 09:57:49 -- setup/common.sh@33 -- # echo 0
00:05:18.752 09:57:49 -- setup/common.sh@33 -- # return 0
00:05:18.752 09:57:49 -- setup/hugepages.sh@97 -- # anon=0
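Each get_meminfo call in this trace prints a full /proc/meminfo snapshot and then walks it field by field, skipping non-matching lines with continue until the requested key is found (here AnonHugePages, which returns 0). A condensed re-implementation of that lookup, using the same IFS=': ' split the trace shows; this is a sketch, not the exact setup/common.sh code:

# Sketch: fetch one /proc/meminfo field the way the traced loop does.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # skip non-matching fields
        echo "$val"    # kB for sizes, a bare count for HugePages_* fields
        return 0
    done < /proc/meminfo
    return 1
}
get_meminfo AnonHugePages   # prints 0 on this test VM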
00:05:18.752 09:57:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:18.752 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:18.752 09:57:49 -- setup/common.sh@18 -- # local node=
00:05:18.752 09:57:49 -- setup/common.sh@19 -- # local var val
00:05:18.752 09:57:49 -- setup/common.sh@20 -- # local mem_f mem
00:05:18.752 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:18.752 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:18.752 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:18.752 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem
00:05:18.752 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:18.752 09:57:49 -- setup/common.sh@31 -- # IFS=': '
00:05:18.752 09:57:49 -- setup/common.sh@31 -- # read -r var val _
00:05:18.752 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570760 kB' 'MemAvailable: 9472592 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 489828 kB' 'Inactive: 2732908 kB' 'Active(anon): 127604 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118744 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167112 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79836 kB' 'KernelStack: 6456 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:18.754 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:18.754 09:57:49 -- setup/common.sh@33 -- # echo 0
00:05:18.754 09:57:49 -- setup/common.sh@33 -- # return 0
00:05:18.754 09:57:49 -- setup/hugepages.sh@99 -- # surp=0
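HugePages_Surp counts surplus pages allocated beyond nr_hugepages, which the kernel only hands out when vm.nr_overcommit_hugepages allows it; the test needs surp=0 so the pool it just sized is exactly the pool it measures. A quick way to inspect both counters (standard procfs paths, not SPDK code):

# Sketch: surplus pages appear only when hugepage overcommit is enabled;
# on this VM both values are expected to be 0.
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
overcommit=$(cat /proc/sys/vm/nr_overcommit_hugepages)
echo "HugePages_Surp=$surp nr_overcommit_hugepages=$overcommit"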
00:05:18.754 09:57:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:18.754 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:18.754 09:57:49 -- setup/common.sh@18 -- # local node=
00:05:19.015 09:57:49 -- setup/common.sh@19 -- # local var val
00:05:19.015 09:57:49 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.015 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.015 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.015 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.015 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.015 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.015 09:57:49 -- setup/common.sh@31 -- # IFS=': '
00:05:19.015 09:57:49 -- setup/common.sh@31 -- # read -r var val _
00:05:19.015 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570760 kB' 'MemAvailable: 9472592 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 489948 kB' 'Inactive: 2732908 kB' 'Active(anon): 127724 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118628 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167104 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79828 kB' 'KernelStack: 6456 kB' 'PageTables: 4028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:19.016 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:19.016 09:57:49 -- setup/common.sh@33 -- # echo 0
00:05:19.016 09:57:49 -- setup/common.sh@33 -- # return 0
00:05:19.016 09:57:49 -- setup/hugepages.sh@100 -- # resv=0
00:05:19.016 09:57:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:05:19.016 nr_hugepages=1025
00:05:19.016 resv_hugepages=0
00:05:19.016 09:57:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:19.016 surplus_hugepages=0
00:05:19.016 09:57:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:19.016 anon_hugepages=0
00:05:19.016 09:57:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:19.016 09:57:49 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:05:19.016 09:57:49 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
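The two arithmetic checks above are the core of verify_nr_hugepages: the requested 1025 pages must equal nr_hugepages + surp + resv, and since surplus and reserved pages are both 0 here, the configured count alone must also equal 1025. The same invariant restated as a standalone sketch that reads the counters straight from /proc/meminfo rather than the script's variables:

# Sketch: the accounting invariant the test enforces, read directly
# from /proc/meminfo instead of the script's variables.
expected=1025
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
if (( expected == total + surp + resv )) && (( expected == total )); then
    echo "hugepage accounting OK"
else
    echo "hugepage accounting mismatch"
fi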
00:05:19.016 09:57:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:19.016 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:19.016 09:57:49 -- setup/common.sh@18 -- # local node=
00:05:19.016 09:57:49 -- setup/common.sh@19 -- # local var val
00:05:19.016 09:57:49 -- setup/common.sh@20 -- # local mem_f mem
00:05:19.016 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:19.016 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:19.016 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:19.016 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem
00:05:19.016 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:19.016 09:57:49 -- setup/common.sh@31 -- # IFS=': '
00:05:19.016 09:57:49 -- setup/common.sh@31 -- # read -r var val _
00:05:19.016 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570760 kB' 'MemAvailable: 9472592 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490012 kB' 'Inactive: 2732908 kB' 'Active(anon): 127788 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 118948 kB' 'Mapped: 48588 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167100 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79824 kB' 'KernelStack: 6464 kB' 'PageTables: 4116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:19.017 09:57:49 -- setup/common.sh@31 --
# read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.017 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.017 09:57:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.018 09:57:49 -- setup/common.sh@33 -- # echo 1025 00:05:19.018 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.018 09:57:49 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:19.018 09:57:49 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.018 09:57:49 -- setup/hugepages.sh@27 -- # local node 00:05:19.018 09:57:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.018 09:57:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:19.018 09:57:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.018 09:57:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.018 09:57:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.018 09:57:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.018 09:57:49 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.018 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.018 09:57:49 -- setup/common.sh@18 -- # local node=0 00:05:19.018 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.018 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.018 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.018 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.018 09:57:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.018 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.018 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6571036 kB' 'MemUsed: 5670940 kB' 'SwapCached: 0 kB' 'Active: 489944 kB' 'Inactive: 2732908 kB' 'Active(anon): 127720 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'FilePages: 3105600 kB' 'Mapped: 48648 kB' 'AnonPages: 118936 kB' 'Shmem: 10468 kB' 'KernelStack: 6448 kB' 'PageTables: 4068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167104 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79828 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 
00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 
09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.018 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.018 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.019 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.019 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.019 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.019 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.019 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.019 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.019 09:57:49 -- setup/common.sh@33 -- # echo 0 00:05:19.019 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.019 09:57:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.019 09:57:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.019 09:57:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.019 node0=1025 expecting 1025 00:05:19.019 09:57:49 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:19.019 09:57:49 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:19.019 00:05:19.019 real 0m0.531s 00:05:19.019 user 0m0.266s 00:05:19.019 sys 0m0.304s 00:05:19.019 09:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.019 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.019 ************************************ 00:05:19.019 END TEST odd_alloc 00:05:19.019 ************************************ 00:05:19.019 09:57:49 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:19.019 09:57:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.019 09:57:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.019 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.019 ************************************ 00:05:19.019 START TEST custom_alloc 00:05:19.019 ************************************ 00:05:19.019 09:57:49 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:19.019 09:57:49 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:19.019 09:57:49 -- setup/hugepages.sh@169 -- # local node 00:05:19.019 09:57:49 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:19.019 09:57:49 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:19.019 09:57:49 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:19.019 09:57:49 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:19.019 09:57:49 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:19.019 09:57:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
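The odd_alloc case above verifies that an odd page count survives allocation intact: node0 reports all 1025 pages, HugePages_Surp stays 0, and (( 1025 == nr_hugepages + surp + resv )) holds. The custom_alloc test that starts here sizes its pool by dividing a kB target by the default hugepage size; with the 2048 kB Hugepagesize reported in the meminfo snapshots, get_test_nr_hugepages 1048576 yields the nr_hugepages=512 traced above. A minimal sketch of that conversion, assuming a kB argument and a 2048 kB default page (the function name is illustrative, not SPDK's verbatim code):

# Sketch only: the size -> page-count step traced at setup/hugepages.sh@49-57.
sketch_get_test_nr_hugepages() {
    local size=$1                  # requested pool size in kB (1048576 above)
    local default_hugepages=2048   # Hugepagesize in kB, per the meminfo snapshots
    (( size >= default_hugepages )) || return 1
    echo $(( size / default_hugepages ))
}
sketch_get_test_nr_hugepages 1048576   # prints 512, matching nr_hugepages above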
00:05:19.019 09:57:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:19.019 09:57:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:19.019 09:57:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.019 09:57:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:19.019 09:57:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.019 09:57:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.019 09:57:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.019 09:57:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:19.019 09:57:49 -- setup/hugepages.sh@83 -- # : 0 00:05:19.019 09:57:49 -- setup/hugepages.sh@84 -- # : 0 00:05:19.019 09:57:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:19.019 09:57:49 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:19.019 09:57:49 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:19.019 09:57:49 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:19.019 09:57:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:19.019 09:57:49 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.019 09:57:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:19.019 09:57:49 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.019 09:57:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.019 09:57:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.019 09:57:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:19.019 09:57:49 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:19.019 09:57:49 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:19.019 09:57:49 -- setup/hugepages.sh@78 -- # return 0 00:05:19.019 09:57:49 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:19.019 09:57:49 -- setup/hugepages.sh@187 -- # setup output 00:05:19.019 09:57:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.019 09:57:49 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.278 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.278 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.278 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.278 09:57:49 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:19.278 09:57:49 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:19.278 09:57:49 -- setup/hugepages.sh@89 -- # local node 00:05:19.278 09:57:49 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.278 09:57:49 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.278 09:57:49 -- setup/hugepages.sh@92 -- # local surp 00:05:19.278 09:57:49 -- setup/hugepages.sh@93 -- # local resv 00:05:19.278 09:57:49 -- setup/hugepages.sh@94 -- # local anon 00:05:19.278 09:57:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.278 09:57:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.278 
09:57:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.278 09:57:49 -- setup/common.sh@18 -- # local node= 00:05:19.278 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.278 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.278 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.278 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.278 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.278 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.278 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.278 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.278 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.278 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615652 kB' 'MemAvailable: 10517484 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490464 kB' 'Inactive: 2732908 kB' 'Active(anon): 128240 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119164 kB' 'Mapped: 48692 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167068 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79792 kB' 'KernelStack: 6568 kB' 'PageTables: 4300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: the scan runs against \A\n\o\n\H\u\g\e\P\a\g\e\s for each field, MemTotal through HardwareCorrupted, until the key matches]
00:05:19.542 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.542 09:57:49 -- setup/common.sh@33 -- # echo 0 00:05:19.542 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.542 09:57:49 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.542 09:57:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.542 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.542 09:57:49 -- setup/common.sh@18 -- # local node= 00:05:19.542 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.542 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.542 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.542 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.542 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.542 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.542 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
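Every verify pass in this trace is the same scan: get_meminfo reads /proc/meminfo, or the node's own meminfo file when a node argument is given (in which case each line carries a "Node N" prefix that gets stripped), then splits lines on IFS=': ' and returns the value of the first matching key. That is the read/compare/continue cadence filling these logs. A minimal re-implementation under those assumptions, mirroring the steps traced at setup/common.sh@17-33 rather than quoting the function verbatim:

#!/usr/bin/env bash
shopt -s extglob                      # the +([0-9]) pattern below needs extglob
# Sketch of the lookup traced above; names mirror the trace, not verbatim SPDK.
sketch_get_meminfo() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo mem
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the repeated compare/continue above
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
sketch_get_meminfo HugePages_Surp 0   # the node0 lookup that printed 0 above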
00:05:19.542 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.542 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.542 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615904 kB' 'MemAvailable: 10517736 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490604 kB' 'Inactive: 2732908 kB' 'Active(anon): 128380 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 232 kB' 'Writeback: 0 kB' 'AnonPages: 119076 kB' 'Mapped: 48752 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167080 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6536 kB' 'PageTables: 4192 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
[xtrace elided: the scan runs against \H\u\g\e\P\a\g\e\s\_\S\u\r\p for each field, MemTotal through HugePages_Rsvd, until the key matches]
00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.544 09:57:49 -- setup/common.sh@33 -- # echo 0 00:05:19.544 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.544 09:57:49 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.544 09:57:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.544 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.544 09:57:49 -- setup/common.sh@18 -- # local node= 00:05:19.544 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.544 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.544 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.544 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.544 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.544 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.544 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615904 kB' 'MemAvailable: 10517736 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490128 kB' 'Inactive: 2732908 kB' 'Active(anon): 127904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119048 kB' 'Mapped: 48648 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167068 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79792 kB' 'KernelStack: 6496 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.544 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.544 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
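A note on the backslash-riddled right-hand sides in these tests: the script compares against an unquoted variable, and bash's xtrace escapes every character of that expansion when it prints the [[ ]] command, which is why HugePages_Rsvd renders as \H\u\g\e\P\a\g\e\s\_\R\s\v\d. Reproducible in any bash:

set -x
get=HugePages_Rsvd
[[ MemTotal == $get ]]   # traced as: [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
set +x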
00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 
-- # read -r var val _ 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.545 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.545 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.546 09:57:49 -- setup/common.sh@33 -- # echo 0 00:05:19.546 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.546 09:57:49 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.546 nr_hugepages=512 00:05:19.546 09:57:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:19.546 resv_hugepages=0 00:05:19.546 09:57:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.546 surplus_hugepages=0 00:05:19.546 09:57:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.546 anon_hugepages=0 00:05:19.546 09:57:49 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.546 09:57:49 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.546 09:57:49 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:19.546 09:57:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.546 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.546 09:57:49 -- setup/common.sh@18 -- # local node= 00:05:19.546 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.546 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.546 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.546 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.546 09:57:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.546 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.546 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615904 kB' 'MemAvailable: 10517736 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490128 kB' 'Inactive: 2732908 kB' 'Active(anon): 127904 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119052 kB' 'Mapped: 48648 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167064 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79788 kB' 'KernelStack: 6496 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 
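At this point the harness holds surp=0, resv=0, and the expected nr_hugepages=512, and hugepages.sh@107/@110 verify that the kernel's HugePages_Total matches their sum. The whole check reduces to this arithmetic, sketched with the values from this run and the get_meminfo_sketch helper defined earlier (a hypothetical helper, same contract as common.sh's get_meminfo):

nr_hugepages=512 surp=0 resv=0
total=$(get_meminfo_sketch HugePages_Total)   # 512 in the trace above
(( total == nr_hugepages + surp + resv )) \
    && echo "hugepage accounting consistent" \
    || echo "mismatch: total=$total"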
00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.546 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.546 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
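The mem=("${mem[@]#Node +([0-9]) }") expansion repeated through this trace normalizes per-node meminfo: lines under /sys/devices/system/node/nodeN/meminfo carry a "Node N " prefix that the global /proc/meminfo lacks, and stripping it lets one parser serve both files. The +([0-9]) is an extglob pattern, so the script must run with shopt -s extglob, as the node+([0-9]) glob at hugepages.sh@29 also implies. A standalone demo:

shopt -s extglob
mem=("Node 0 HugePages_Total: 512" "Node 0 HugePages_Free: 512")
mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node 0 " prefix
printf '%s\n' "${mem[@]}"          # -> HugePages_Total: 512 / HugePages_Free: 512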
00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.547 09:57:49 -- setup/common.sh@33 -- # echo 512 00:05:19.547 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.547 09:57:49 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.547 09:57:49 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.547 09:57:49 -- setup/hugepages.sh@27 -- # local node 00:05:19.547 09:57:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.547 09:57:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.547 09:57:49 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.547 09:57:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.547 09:57:49 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:05:19.547 09:57:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.547 09:57:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.547 09:57:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.547 09:57:49 -- setup/common.sh@18 -- # local node=0 00:05:19.547 09:57:49 -- setup/common.sh@19 -- # local var val 00:05:19.547 09:57:49 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.547 09:57:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.547 09:57:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.547 09:57:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.547 09:57:49 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.547 09:57:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 7615904 kB' 'MemUsed: 4626072 kB' 'SwapCached: 0 kB' 'Active: 490180 kB' 'Inactive: 2732908 kB' 'Active(anon): 127956 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105600 kB' 'Mapped: 48648 kB' 'AnonPages: 119116 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167064 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79788 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 
09:57:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.547 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.547 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 
-- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # continue 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.548 09:57:49 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.548 09:57:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.548 09:57:49 -- setup/common.sh@33 -- # echo 0 00:05:19.548 09:57:49 -- setup/common.sh@33 -- # return 0 00:05:19.548 09:57:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.548 09:57:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.548 09:57:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.548 09:57:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.548 node0=512 expecting 512 00:05:19.548 09:57:49 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:19.548 09:57:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:19.548 00:05:19.548 real 0m0.529s 00:05:19.548 user 0m0.276s 00:05:19.548 sys 0m0.287s 00:05:19.548 09:57:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.548 09:57:49 -- common/autotest_common.sh@10 -- # set +x 00:05:19.548 ************************************ 00:05:19.548 END TEST custom_alloc 00:05:19.548 ************************************ 00:05:19.548 09:57:49 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:19.548 09:57:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.548 09:57:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.548 09:57:50 -- common/autotest_common.sh@10 -- # set +x 00:05:19.548 ************************************ 00:05:19.548 START TEST no_shrink_alloc 00:05:19.548 ************************************ 00:05:19.548 09:57:50 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:19.548 09:57:50 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:19.548 09:57:50 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.548 09:57:50 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:19.548 09:57:50 -- setup/hugepages.sh@51 -- # shift 00:05:19.548 09:57:50 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:19.548 09:57:50 -- setup/hugepages.sh@52 -- # local node_ids 00:05:19.548 09:57:50 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.548 09:57:50 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:19.548 09:57:50 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:19.548 09:57:50 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:19.548 09:57:50 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.548 09:57:50 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:19.548 09:57:50 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.548 09:57:50 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.548 09:57:50 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.548 09:57:50 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:19.548 09:57:50 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:19.548 09:57:50 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:19.548 09:57:50 -- setup/hugepages.sh@73 -- # return 0 00:05:19.548 09:57:50 -- setup/hugepages.sh@198 -- # setup output 00:05:19.548 09:57:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.548 09:57:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.070 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.070 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.070 09:57:50 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:20.070 09:57:50 -- setup/hugepages.sh@89 -- # local node 00:05:20.070 09:57:50 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.070 09:57:50 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.070 09:57:50 -- setup/hugepages.sh@92 -- # local surp 00:05:20.070 09:57:50 -- setup/hugepages.sh@93 -- # local resv 00:05:20.070 09:57:50 -- setup/hugepages.sh@94 -- # local anon 00:05:20.070 09:57:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.070 09:57:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.070 09:57:50 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.070 09:57:50 -- setup/common.sh@18 -- # local node= 00:05:20.070 09:57:50 -- setup/common.sh@19 -- # local var val 00:05:20.070 09:57:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.070 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.070 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.070 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.070 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.070 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6573144 kB' 'MemAvailable: 9474976 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490928 kB' 'Inactive: 2732908 kB' 'Active(anon): 128704 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119796 kB' 'Mapped: 48852 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167028 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79752 kB' 'KernelStack: 6472 kB' 'PageTables: 3960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 
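Two details from the no_shrink_alloc prologue above, sketched rather than quoted from hugepages.sh. First, the page count: assuming get_test_nr_hugepages takes its size argument in kB, dividing by the 2048 kB Hugepagesize yields exactly the nr_hugepages=1024 the trace echoes. Second, the test at hugepages.sh@96 gates the anon-hugepage read: the kernel brackets the active mode in /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never" on this VM), and the *\[\n\e\v\e\r\]* pattern simply asks whether that active mode is [never].

# Page-count arithmetic (assumption: the size argument is in kB):
size_kb=2097152
hugepagesize_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 here
echo "nr_hugepages=$(( size_kb / hugepagesize_kb ))"                 # -> nr_hugepages=1024

# THP gate, as in the @96 test above:
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
[[ $thp != *"[never]"* ]] && echo "THP not disabled; AnonHugePages is meaningful"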
00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.070 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.070 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 
09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # 
continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.071 09:57:50 -- setup/common.sh@33 -- # echo 0 00:05:20.071 09:57:50 -- setup/common.sh@33 -- # return 0 00:05:20.071 09:57:50 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.071 09:57:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.071 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.071 09:57:50 -- setup/common.sh@18 -- # local node= 00:05:20.071 09:57:50 -- setup/common.sh@19 -- # local var val 00:05:20.071 09:57:50 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.071 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.071 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.071 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.071 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.071 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.071 09:57:50 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.071 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6573032 kB' 'MemAvailable: 9474864 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490152 kB' 'Inactive: 2732908 kB' 'Active(anon): 127928 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119000 kB' 'Mapped: 48784 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167080 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6496 kB' 'PageTables: 4220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB' 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.071 09:57:50 -- setup/common.sh@32 -- # 
00:05:20.071 09:57:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.071 09:57:50 -- setup/common.sh@32 -- # continue
... (the compare/continue pair repeats for every key in the snapshot above until the requested one is reached) ...
00:05:20.072 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.072 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.072 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.072 09:57:50 -- setup/hugepages.sh@99 -- # surp=0
00:05:20.072 09:57:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:20.072 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:20.072 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.072 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.072 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.072 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.072 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.072 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.072 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.072 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.073 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6573032 kB' 'MemAvailable: 9474864 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490168 kB' 'Inactive: 2732908 kB' 'Active(anon): 127944 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119016 kB' 'Mapped: 48648 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167080 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6480 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
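A note on the trace format, since every compare record renders its right-hand side as \H\u\g\e\P\a\g\e\s\_\S\u\r\p: inside [[ ]] a quoted right-hand side is matched as a literal string rather than as a glob, and bash xtrace prints such a pattern with each character backslash-escaped to make the literal match visible. A two-line standalone reproduction (hypothetical snippet, not from the harness):

    set -x
    key=HugePages_Surp
    [[ HugePages_Surp == "$key" ]] && echo match
    # xtrace renders the test as: [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]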
00:05:20.073 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.073 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.073 09:57:50 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.073 09:57:50 -- setup/common.sh@32 -- # continue
... (same key-by-key scan as above, ending at the HugePages_Rsvd entry) ...
00:05:20.074 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.074 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.074 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.074 09:57:50 -- setup/hugepages.sh@100 -- # resv=0
00:05:20.074 09:57:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:05:20.074 09:57:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:05:20.074 09:57:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:05:20.074 09:57:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:05:20.074 09:57:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.074 09:57:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
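The two arithmetic checks above are the heart of verify_nr_hugepages: the pool this run configured (1024 pages) must equal what the kernel reports once surplus and reserved pages are accounted for, i.e. nothing leaked or was stolen while the test ran. Roughly, and assuming the get_meminfo sketch from earlier (illustrative only, not the verbatim hugepages.sh logic):

    expected=1024   # pool size this test run configured, per the trace
    surp=$(get_meminfo HugePages_Surp)
    resv=$(get_meminfo HugePages_Rsvd)
    total=$(get_meminfo HugePages_Total)
    echo "nr_hugepages=$total" "resv_hugepages=$resv" "surplus_hugepages=$surp"
    (( total == expected + surp + resv )) || echo 'hugepage accounting mismatch' >&2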
00:05:20.074 09:57:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:20.074 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:20.074 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.074 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.074 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.074 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.074 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.074 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.074 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.074 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.074 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.074 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.074 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6573032 kB' 'MemAvailable: 9474864 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490252 kB' 'Inactive: 2732908 kB' 'Active(anon): 128028 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119132 kB' 'Mapped: 48648 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167080 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79804 kB' 'KernelStack: 6480 kB' 'PageTables: 4168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
... (key-by-key scan as before, down to the HugePages_Total entry) ...
00:05:20.075 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:05:20.075 09:57:50 -- setup/common.sh@33 -- # echo 1024
00:05:20.075 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.075 09:57:50 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.075 09:57:50 -- setup/hugepages.sh@112 -- # get_nodes
00:05:20.075 09:57:50 -- setup/hugepages.sh@27 -- # local node
00:05:20.075 09:57:50 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:05:20.075 09:57:50 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:05:20.075 09:57:50 -- setup/hugepages.sh@32 -- # no_nodes=1
00:05:20.075 09:57:50 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:05:20.075 09:57:50 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:05:20.075 09:57:50 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:05:20.075 09:57:50 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:05:20.075 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.075 09:57:50 -- setup/common.sh@18 -- # local node=0
00:05:20.075 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.075 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.075 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.075 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:05:20.075 09:57:50 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:05:20.075 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.075 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.075 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.075 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.075 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6573032 kB' 'MemUsed: 5668944 kB' 'SwapCached: 0 kB' 'Active: 489904 kB' 'Inactive: 2732908 kB' 'Active(anon): 127680 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105600 kB' 'Mapped: 48648 kB' 'AnonPages: 118796 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167080 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
... (scan over the node0 snapshot until HugePages_Surp matches) ...
00:05:20.076 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.076 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.076 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.076 09:57:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
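For this per-node pass, get_meminfo was handed a node index, and the @23/@24 records show it switching from /proc/meminfo to the node's sysfs copy. The two files expose the same counters, the sysfs one with a "Node N " prefix (which the @29 expansion strips); a quick way to compare them by hand, assuming node 0 exists as it does on this box:

    # System-wide vs per-node view of the same hugepage counter:
    grep HugePages_Total /proc/meminfo
    # -> HugePages_Total:    1024
    grep HugePages_Total /sys/devices/system/node/node0/meminfo
    # -> Node 0 HugePages_Total:  1024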
00:05:20.076 09:57:50 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:05:20.076 09:57:50 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:05:20.076 09:57:50 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:05:20.076 09:57:50 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:05:20.076 node0=1024 expecting 1024
00:05:20.076 09:57:50 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:05:20.076 09:57:50 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:05:20.076 09:57:50 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:05:20.076 09:57:50 -- setup/hugepages.sh@202 -- # NRHUGE=512
00:05:20.076 09:57:50 -- setup/hugepages.sh@202 -- # setup output
00:05:20.076 09:57:50 -- setup/common.sh@9 -- # [[ output == output ]]
00:05:20.076 09:57:50 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:05:20.336 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:05:20.597 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.597 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver
00:05:20.597 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:05:20.598 09:57:50 -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:05:20.598 09:57:50 -- setup/hugepages.sh@89 -- # local node
00:05:20.598 09:57:50 -- setup/hugepages.sh@90 -- # local sorted_t
00:05:20.598 09:57:50 -- setup/hugepages.sh@91 -- # local sorted_s
00:05:20.598 09:57:50 -- setup/hugepages.sh@92 -- # local surp
00:05:20.598 09:57:50 -- setup/hugepages.sh@93 -- # local resv
00:05:20.598 09:57:50 -- setup/hugepages.sh@94 -- # local anon
00:05:20.598 09:57:50 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
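The `[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]` test traced above reads the bracketed selection in /sys/kernel/mm/transparent_hugepage/enabled and only proceeds to query AnonHugePages when transparent hugepages are not disabled. A minimal standalone sketch of the same check (the helper name thp_not_disabled is hypothetical; in the script the test is inline in setup/hugepages.sh):

    #!/usr/bin/env bash
    # The kernel brackets the active THP mode, e.g. "always [madvise] never".
    # The trace above passes this test because the selected mode is not [never].
    thp_not_disabled() {
        local modes
        modes=$(</sys/kernel/mm/transparent_hugepage/enabled) || return 1
        [[ $modes != *"[never]"* ]]
    }
    thp_not_disabled && echo "THP enabled: AnonHugePages is worth checking"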
00:05:20.598 09:57:50 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:05:20.598 09:57:50 -- setup/common.sh@17 -- # local get=AnonHugePages
00:05:20.598 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.598 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.598 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.598 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.598 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.598 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.598 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.598 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.598 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.598 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.598 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570556 kB' 'MemAvailable: 9472388 kB' 'Buffers: 2436 kB' 'Cached: 3103164 kB' 'SwapCached: 0 kB' 'Active: 490724 kB' 'Inactive: 2732908 kB' 'Active(anon): 128500 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 119616 kB' 'Mapped: 48736 kB' 'Shmem: 10468 kB' 'KReclaimable: 87276 kB' 'Slab: 167104 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79828 kB' 'KernelStack: 6536 kB' 'PageTables: 4204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 346104 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6144 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 182124 kB' 'DirectMap2M: 6109184 kB' 'DirectMap1G: 8388608 kB'
00:05:20.598 [xtrace condensed: setup/common.sh@31-32 loop — every field from MemTotal through HardwareCorrupted fails [[ $var == AnonHugePages ]] and the loop continues]
00:05:20.599 09:57:50 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:05:20.599 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.599 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.599 09:57:50 -- setup/hugepages.sh@97 -- # anon=0
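Every get_meminfo call in this trace follows the same shape: snapshot the meminfo file into an array with mapfile, strip any "Node <n> " prefix, then read "var: val" pairs until the requested key matches and echo its value (0 for AnonHugePages above). A rough self-contained re-creation of that pattern — a sketch, not the actual setup/common.sh implementation (the sed-based prefix strip stands in for the script's extglob expansion):

    #!/usr/bin/env bash
    # Print the value of one meminfo key, optionally from a per-NUMA-node file.
    get_meminfo() {
        local key=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        # Per-node files prefix every line with "Node <n> "; strip it so both
        # formats parse identically, then split each line on ':' and spaces.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$key" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    get_meminfo AnonHugePages      # 0 in the run above
    get_meminfo HugePages_Total    # 1024
    get_meminfo HugePages_Surp 0   # per-node lookup on node0, 0 above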
00:05:20.599 09:57:50 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:05:20.599 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:05:20.599 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.599 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.599 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.599 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.599 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.599 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.599 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.599 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.599 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.599 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.599 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot as above, except Active: 490348 kB, Active(anon): 128124 kB, AnonPages: 119192 kB, Mapped: 48788 kB, KernelStack: 6488 kB, PageTables: 4072 kB, VmallocUsed: 54852 kB]
00:05:20.600 [xtrace condensed: setup/common.sh@31-32 loop — every field from MemTotal through HugePages_Rsvd fails [[ $var == HugePages_Surp ]] and the loop continues]
00:05:20.600 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:05:20.600 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.600 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.600 09:57:50 -- setup/hugepages.sh@99 -- # surp=0
00:05:20.600 09:57:50 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:05:20.600 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:05:20.600 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.600 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.600 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.600 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.600 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.600 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.600 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.600 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.600 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.600 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.600 09:57:50 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot as above, except Active: 490176 kB, Active(anon): 127952 kB, AnonPages: 119056 kB, Mapped: 48648 kB, Slab: 167100 kB, SUnreclaim: 79824 kB, KernelStack: 6480 kB, PageTables: 4160 kB]
00:05:20.601 [xtrace condensed: setup/common.sh@31-32 loop — every field from MemTotal through HugePages_Free fails [[ $var == HugePages_Rsvd ]] and the loop continues]
00:05:20.602 09:57:50 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:05:20.602 09:57:50 -- setup/common.sh@33 -- # echo 0
00:05:20.602 09:57:50 -- setup/common.sh@33 -- # return 0
00:05:20.602 09:57:50 -- setup/hugepages.sh@100 -- # resv=0
00:05:20.602 nr_hugepages=1024
00:05:20.602 09:57:50 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:05:20.602 resv_hugepages=0
00:05:20.602 09:57:50 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:05:20.602 surplus_hugepages=0
00:05:20.602 09:57:50 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:05:20.602 anon_hugepages=0
00:05:20.602 09:57:50 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:05:20.602 09:57:50 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:05:20.602 09:57:50 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
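The pair of arithmetic tests just traced is the core of the verification: HugePages_Total must equal the requested nr_hugepages plus surplus plus reserved pages, and since surp and resv are both 0 in this run, that collapses to an exact 1024 == 1024 match. The same checks in isolation (a sketch reusing the hypothetical get_meminfo helper above, not the script's actual verify_nr_hugepages function):

    #!/usr/bin/env bash
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run

    # Allocated pages must account for surplus and reserved pages...
    (( total == nr_hugepages + surp + resv )) || exit 1
    # ...and with surp == resv == 0 this reduces to an exact match.
    (( total == nr_hugepages )) && echo "nr_hugepages verified: $total"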
00:05:20.602 09:57:50 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:05:20.602 09:57:50 -- setup/common.sh@17 -- # local get=HugePages_Total
00:05:20.602 09:57:50 -- setup/common.sh@18 -- # local node=
00:05:20.602 09:57:50 -- setup/common.sh@19 -- # local var val
00:05:20.602 09:57:50 -- setup/common.sh@20 -- # local mem_f mem
00:05:20.602 09:57:50 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:05:20.602 09:57:50 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:05:20.602 09:57:50 -- setup/common.sh@25 -- # [[ -n '' ]]
00:05:20.602 09:57:50 -- setup/common.sh@28 -- # mapfile -t mem
00:05:20.602 09:57:50 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:05:20.602 09:57:50 -- setup/common.sh@31 -- # IFS=': '
00:05:20.602 09:57:50 -- setup/common.sh@31 -- # read -r var val _
00:05:20.602 09:57:51 -- setup/common.sh@16 -- # printf '%s\n' [meminfo snapshot as above, except Active: 490144 kB, Active(anon): 127920 kB, AnonPages: 119032 kB; other fields as in the previous snapshot]
00:05:20.603 [xtrace condensed: setup/common.sh@31-32 loop — every field from MemTotal through Unaccepted fails [[ $var == HugePages_Total ]] and the loop continues]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.603 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.603 09:57:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.603 09:57:51 -- setup/common.sh@33 -- # echo 1024 00:05:20.603 09:57:51 -- setup/common.sh@33 -- # return 0 00:05:20.603 09:57:51 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:20.603 09:57:51 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.603 09:57:51 -- setup/hugepages.sh@27 -- # local node 00:05:20.603 09:57:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.603 09:57:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:20.603 09:57:51 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.603 09:57:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.604 09:57:51 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.604 09:57:51 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.604 09:57:51 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.604 09:57:51 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.604 09:57:51 -- setup/common.sh@18 -- # local node=0 00:05:20.604 09:57:51 -- setup/common.sh@19 -- # local var val 00:05:20.604 09:57:51 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.604 09:57:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.604 09:57:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.604 09:57:51 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.604 09:57:51 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.604 09:57:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241976 kB' 'MemFree: 6570556 kB' 'MemUsed: 5671420 kB' 'SwapCached: 0 kB' 'Active: 490140 kB' 'Inactive: 2732908 kB' 'Active(anon): 127916 kB' 'Inactive(anon): 0 kB' 'Active(file): 362224 kB' 'Inactive(file): 2732908 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 3105600 kB' 'Mapped: 48648 kB' 'AnonPages: 119072 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4208 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 87276 kB' 'Slab: 167100 kB' 'SReclaimable: 87276 kB' 'SUnreclaim: 79824 kB' 'AnonHugePages: 0 
kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 
00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.604 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.604 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # continue 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.605 09:57:51 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.605 09:57:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.605 09:57:51 -- setup/common.sh@33 -- # echo 0 00:05:20.605 09:57:51 -- setup/common.sh@33 -- # return 0 00:05:20.605 09:57:51 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.605 09:57:51 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.605 09:57:51 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.605 09:57:51 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.605 node0=1024 expecting 1024 00:05:20.605 09:57:51 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.605 09:57:51 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.605 00:05:20.605 real 0m1.039s 00:05:20.605 user 0m0.510s 00:05:20.605 sys 0m0.595s 00:05:20.605 09:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.605 09:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.605 ************************************ 00:05:20.605 END TEST no_shrink_alloc 00:05:20.605 ************************************ 00:05:20.605 09:57:51 -- setup/hugepages.sh@217 -- # clear_hp 00:05:20.605 09:57:51 -- setup/hugepages.sh@37 -- # local node hp 00:05:20.605 09:57:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:20.605 09:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.605 09:57:51 -- setup/hugepages.sh@41 -- # echo 0 00:05:20.605 09:57:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:20.605 09:57:51 -- setup/hugepages.sh@41 -- # echo 0 00:05:20.605 09:57:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:20.605 09:57:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:20.605 00:05:20.605 real 0m4.572s 00:05:20.605 user 0m2.199s 00:05:20.605 sys 0m2.506s 00:05:20.605 ************************************ 00:05:20.605 END TEST hugepages 00:05:20.605 ************************************ 00:05:20.605 09:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.605 09:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.864 09:57:51 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:20.864 09:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.864 09:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.865 09:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:20.865 ************************************ 00:05:20.865 START TEST driver 00:05:20.865 ************************************ 00:05:20.865 09:57:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:20.865 * Looking for test storage... 
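
The hugepages trace above reduces to one pattern: scan a meminfo file key by key and echo the value of the requested field, preferring the per-node sysfs copy when a node is given. A minimal standalone sketch of that pattern (hypothetical helper, not the SPDK code verbatim):

get_meminfo() {
    local get=$1 node=$2 line var val _
    local mem_f=/proc/meminfo
    # per-node counters live in sysfs; every line there carries a "Node N " prefix
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#"Node $node "}             # strip the prefix; no-op for /proc/meminfo
        IFS=': ' read -r var val _ <<<"$line"  # e.g. var=HugePages_Total val=1024
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done <"$mem_f"
    return 1
}

get_meminfo HugePages_Total     # prints 1024 on the box traced above
get_meminfo HugePages_Surp 0    # node 0 surplus huge pages: 0 here
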
00:05:20.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:20.865 09:57:51 -- setup/driver.sh@68 -- # setup reset 00:05:20.865 09:57:51 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:20.865 09:57:51 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:21.433 09:57:51 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:21.433 09:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.433 09:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.433 09:57:51 -- common/autotest_common.sh@10 -- # set +x 00:05:21.433 ************************************ 00:05:21.433 START TEST guess_driver 00:05:21.433 ************************************ 00:05:21.433 09:57:51 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:21.433 09:57:51 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:21.433 09:57:51 -- setup/driver.sh@47 -- # local fail=0 00:05:21.433 09:57:51 -- setup/driver.sh@49 -- # pick_driver 00:05:21.433 09:57:51 -- setup/driver.sh@36 -- # vfio 00:05:21.433 09:57:51 -- setup/driver.sh@21 -- # local iommu_groups 00:05:21.433 09:57:51 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:21.433 09:57:51 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:21.433 09:57:51 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:21.433 09:57:51 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:21.433 09:57:51 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:21.433 09:57:51 -- setup/driver.sh@32 -- # return 1 00:05:21.433 09:57:51 -- setup/driver.sh@38 -- # uio 00:05:21.433 09:57:51 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:21.433 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:21.433 09:57:51 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:21.433 Looking for driver=uio_pci_generic 00:05:21.433 09:57:51 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:21.433 09:57:51 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:21.433 09:57:51 -- setup/driver.sh@45 -- # setup output config 00:05:21.433 09:57:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.433 09:57:51 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:22.000 09:57:52 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:22.000 09:57:52 -- setup/driver.sh@58 -- # continue 00:05:22.000 09:57:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.000 09:57:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.000 09:57:52 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:22.000 09:57:52 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.259 09:57:52 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:22.259 09:57:52 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:22.259 09:57:52 --
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:22.259 09:57:52 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:22.259 09:57:52 -- setup/driver.sh@65 -- # setup reset 00:05:22.259 09:57:52 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.259 09:57:52 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:22.826 00:05:22.826 real 0m1.425s 00:05:22.826 user 0m0.563s 00:05:22.826 sys 0m0.880s 00:05:22.826 09:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.826 ************************************ 00:05:22.826 END TEST guess_driver 00:05:22.826 09:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.826 ************************************ 00:05:22.826 00:05:22.826 real 0m2.113s 00:05:22.826 user 0m0.790s 00:05:22.826 sys 0m1.384s 00:05:22.826 09:57:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.826 09:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.826 ************************************ 00:05:22.826 END TEST driver 00:05:22.826 ************************************ 00:05:22.826 09:57:53 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:22.826 09:57:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.826 09:57:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.826 09:57:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.826 ************************************ 00:05:22.826 START TEST devices 00:05:22.826 ************************************ 00:05:22.826 09:57:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:23.085 * Looking for test storage... 00:05:23.085 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:23.085 09:57:53 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:23.085 09:57:53 -- setup/devices.sh@192 -- # setup reset 00:05:23.085 09:57:53 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.085 09:57:53 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.653 09:57:54 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:23.653 09:57:54 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:23.653 09:57:54 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:23.653 09:57:54 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:23.653 09:57:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:23.653 09:57:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:23.653 09:57:54 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:23.653 09:57:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:23.653 09:57:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:23.653 09:57:54 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:23.653 09:57:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:23.653 09:57:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:23.653 09:57:54 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:23.653 09:57:54 -- 
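
The guess_driver pass above picks vfio only when /sys/kernel/iommu_groups is populated (or unsafe no-IOMMU mode is enabled) and otherwise falls back to uio_pci_generic, using modprobe --show-depends to confirm the module exists without loading it. A condensed sketch of that decision (illustrative names and fallbacks, same assumptions as the trace):

pick_driver() {
    shopt -s nullglob                  # empty array, not a literal glob, when no groups exist
    local groups=(/sys/kernel/iommu_groups/*)
    local unsafe=/sys/module/vfio/parameters/enable_unsafe_noiommu_mode
    if (( ${#groups[@]} > 0 )) || [[ $(cat "$unsafe" 2>/dev/null) == Y ]]; then
        echo vfio-pci                  # vfio needs a working IOMMU or unsafe no-IOMMU mode
    elif modprobe --show-depends uio_pci_generic &>/dev/null; then
        echo uio_pci_generic           # --show-depends resolves the .ko path without loading it
    else
        echo 'No valid driver found'
        return 1
    fi
}

pick_driver    # prints uio_pci_generic in the VM traced above
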
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:23.653 09:57:54 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:23.653 09:57:54 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:23.653 09:57:54 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:23.653 09:57:54 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:23.653 09:57:54 -- setup/devices.sh@196 -- # blocks=() 00:05:23.653 09:57:54 -- setup/devices.sh@196 -- # declare -a blocks 00:05:23.653 09:57:54 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:23.653 09:57:54 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:23.653 09:57:54 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:23.653 09:57:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.653 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:23.653 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:23.653 09:57:54 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:23.653 09:57:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:23.653 09:57:54 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:23.653 09:57:54 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:23.653 09:57:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:23.912 No valid GPT data, bailing 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # pt= 00:05:23.912 09:57:54 -- scripts/common.sh@394 -- # return 1 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:23.912 09:57:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:23.912 09:57:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:23.912 09:57:54 -- setup/common.sh@80 -- # echo 5368709120 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:23.912 09:57:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.912 09:57:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:23.912 09:57:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.912 09:57:54 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.912 09:57:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:23.912 09:57:54 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:23.912 09:57:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:23.912 No valid GPT data, bailing 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # pt= 00:05:23.912 09:57:54 -- scripts/common.sh@394 -- # return 1 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:23.912 09:57:54 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:23.912 09:57:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:23.912 09:57:54 -- setup/common.sh@80 -- # echo 4294967296 00:05:23.912 09:57:54 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:23.912 09:57:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.912 09:57:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:23.912 09:57:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.912 09:57:54 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.912 09:57:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:23.912 09:57:54 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:23.912 09:57:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:23.912 No valid GPT data, bailing 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:23.912 09:57:54 -- scripts/common.sh@393 -- # pt= 00:05:23.912 09:57:54 -- scripts/common.sh@394 -- # return 1 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:23.912 09:57:54 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:23.912 09:57:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:23.912 09:57:54 -- setup/common.sh@80 -- # echo 4294967296 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:23.912 09:57:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:23.912 09:57:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:23.912 09:57:54 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:23.912 09:57:54 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:23.912 09:57:54 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:23.912 09:57:54 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:23.912 09:57:54 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:23.912 09:57:54 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:23.912 09:57:54 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:24.171 No valid GPT data, bailing 00:05:24.171 09:57:54 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:24.171 09:57:54 -- scripts/common.sh@393 -- # pt= 00:05:24.171 09:57:54 -- scripts/common.sh@394 -- # return 1 00:05:24.171 09:57:54 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:24.171 09:57:54 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:24.171 09:57:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:24.171 09:57:54 -- setup/common.sh@80 -- # echo 4294967296 00:05:24.171 09:57:54 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:24.171 09:57:54 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:24.171 09:57:54 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:24.171 09:57:54 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:24.171 09:57:54 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:24.171 09:57:54 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:24.171 09:57:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.171 09:57:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.171 09:57:54 -- common/autotest_common.sh@10 -- # set +x 00:05:24.171 
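
Before nvme_mount starts, the devices test has filtered out zoned namespaces and kept only blank disks that clear the minimum size. The same eligibility checks, sketched with sysfs reads and plain blkid (illustrative; the trace delegates the partition-table probe to spdk-gpt.py, and additionally skips nvme*c* multipath nodes):

min_disk_size=$((3 * 1024 * 1024 * 1024))        # 3221225472 bytes, as in the trace
for sysblk in /sys/block/nvme*n*; do
    dev=/dev/${sysblk##*/}
    # skip zoned (ZNS) namespaces: queue/zoned reads "none" for a regular disk
    [[ $(cat "$sysblk/queue/zoned" 2>/dev/null) != none ]] && continue
    # skip disks that already carry a partition table
    [[ -n $(blkid -s PTTYPE -o value "$dev") ]] && continue
    size=$(( $(cat "$sysblk/size") * 512 ))      # sysfs size is in 512 B sectors
    (( size >= min_disk_size )) && echo "candidate: $dev ($size bytes)"
done
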
************************************ 00:05:24.171 START TEST nvme_mount 00:05:24.171 ************************************ 00:05:24.171 09:57:54 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:24.171 09:57:54 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:24.171 09:57:54 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:24.171 09:57:54 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:24.171 09:57:54 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:24.171 09:57:54 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:24.171 09:57:54 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:24.171 09:57:54 -- setup/common.sh@40 -- # local part_no=1 00:05:24.171 09:57:54 -- setup/common.sh@41 -- # local size=1073741824 00:05:24.171 09:57:54 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:24.172 09:57:54 -- setup/common.sh@44 -- # parts=() 00:05:24.172 09:57:54 -- setup/common.sh@44 -- # local parts 00:05:24.172 09:57:54 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:24.172 09:57:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.172 09:57:54 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:24.172 09:57:54 -- setup/common.sh@46 -- # (( part++ )) 00:05:24.172 09:57:54 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:24.172 09:57:54 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:24.172 09:57:54 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:24.172 09:57:54 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:25.107 Creating new GPT entries in memory. 00:05:25.107 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:25.107 other utilities. 00:05:25.107 09:57:55 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:25.107 09:57:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.107 09:57:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.107 09:57:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.107 09:57:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:26.043 Creating new GPT entries in memory. 00:05:26.043 The operation has completed successfully. 
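
The partitioning step just traced is plain sgdisk run under an exclusive lock on the disk node. Reduced to its two commands (illustrative device name; destructive, scratch disks only):

disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                          # destroy the GPT and protective MBR
# create partition 1 over sectors 2048..264191 (262144 x 512 B = 128 MiB),
# holding flock on the device while the table is rewritten, as the trace does
flock "$disk" sgdisk "$disk" --new=1:2048:264191
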
00:05:26.043 09:57:56 -- setup/common.sh@57 -- # (( part++ )) 00:05:26.043 09:57:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.043 09:57:56 -- setup/common.sh@62 -- # wait 65813 00:05:26.043 09:57:56 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.043 09:57:56 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:26.043 09:57:56 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.043 09:57:56 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:26.043 09:57:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:26.302 09:57:56 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.302 09:57:56 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.302 09:57:56 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:26.302 09:57:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:26.302 09:57:56 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.302 09:57:56 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.302 09:57:56 -- setup/devices.sh@53 -- # local found=0 00:05:26.302 09:57:56 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.302 09:57:56 -- setup/devices.sh@56 -- # : 00:05:26.302 09:57:56 -- setup/devices.sh@59 -- # local pci status 00:05:26.302 09:57:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.302 09:57:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:26.302 09:57:56 -- setup/devices.sh@47 -- # setup output config 00:05:26.302 09:57:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.302 09:57:56 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:26.302 09:57:56 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.302 09:57:56 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:26.302 09:57:56 -- setup/devices.sh@63 -- # found=1 00:05:26.302 09:57:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.302 09:57:56 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.302 09:57:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.869 09:57:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.869 09:57:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.869 09:57:57 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:26.869 09:57:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.869 09:57:57 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:26.869 09:57:57 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:26.869 09:57:57 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.870 09:57:57 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:26.870 09:57:57 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:26.870 09:57:57 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:26.870 09:57:57 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.870 09:57:57 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:26.870 09:57:57 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:26.870 09:57:57 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:26.870 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:26.870 09:57:57 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:26.870 09:57:57 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:27.129 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.129 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:27.129 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:27.129 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:27.129 09:57:57 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:27.129 09:57:57 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:27.129 09:57:57 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.129 09:57:57 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:27.129 09:57:57 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:27.129 09:57:57 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.129 09:57:57 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.129 09:57:57 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.129 09:57:57 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:27.129 09:57:57 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.129 09:57:57 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.129 09:57:57 -- setup/devices.sh@53 -- # local found=0 00:05:27.129 09:57:57 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.129 09:57:57 -- setup/devices.sh@56 -- # : 00:05:27.129 09:57:57 -- setup/devices.sh@59 -- # local pci status 00:05:27.129 09:57:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.129 09:57:57 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.129 09:57:57 -- setup/devices.sh@47 -- # setup output config 00:05:27.129 09:57:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.129 09:57:57 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.388 09:57:57 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.388 09:57:57 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:27.388 09:57:57 -- setup/devices.sh@63 -- # found=1 00:05:27.388 09:57:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.388 09:57:57 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.388 
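
Each nvme_mount pass above is the same mkfs/mount/inspect/cleanup cycle. Its core, stripped of the xtrace plumbing (illustrative paths; destructive on the target device):

dev=/dev/nvme0n1p1
mnt=/mnt/nvme_mount_test
mkfs.ext4 -qF "$dev"            # -q quiet, -F skip the confirmation prompt
mkdir -p "$mnt"
mount "$dev" "$mnt"
touch "$mnt/test_nvme"          # the dummy file the verify step checks for
mountpoint -q "$mnt" && [[ -e $mnt/test_nvme ]] && echo verified
# cleanup mirrors the trace: unmount, then erase the filesystem signature
umount "$mnt"
wipefs --all "$dev"             # e.g. "2 bytes were erased at offset 0x00000438 (ext4): 53 ef"
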
09:57:57 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.646 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.646 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.905 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.905 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.905 09:57:58 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:27.905 09:57:58 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:27.905 09:57:58 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.905 09:57:58 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.905 09:57:58 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.905 09:57:58 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.905 09:57:58 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:27.905 09:57:58 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.905 09:57:58 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:27.905 09:57:58 -- setup/devices.sh@50 -- # local mount_point= 00:05:27.905 09:57:58 -- setup/devices.sh@51 -- # local test_file= 00:05:27.905 09:57:58 -- setup/devices.sh@53 -- # local found=0 00:05:27.905 09:57:58 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:27.905 09:57:58 -- setup/devices.sh@59 -- # local pci status 00:05:27.905 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.905 09:57:58 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.905 09:57:58 -- setup/devices.sh@47 -- # setup output config 00:05:27.905 09:57:58 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.905 09:57:58 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.163 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.163 09:57:58 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:28.163 09:57:58 -- setup/devices.sh@63 -- # found=1 00:05:28.163 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.163 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.163 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.422 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.422 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.422 09:57:58 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.422 09:57:58 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.680 09:57:59 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.680 09:57:59 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:28.681 09:57:59 -- setup/devices.sh@68 -- # return 0 00:05:28.681 09:57:59 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:28.681 09:57:59 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.681 09:57:59 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.681 09:57:59 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.681 09:57:59 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.681 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:28.681 00:05:28.681 real 0m4.531s 00:05:28.681 user 0m1.050s 00:05:28.681 sys 0m1.185s 00:05:28.681 09:57:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.681 ************************************ 00:05:28.681 09:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.681 END TEST nvme_mount 00:05:28.681 ************************************ 00:05:28.681 09:57:59 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:28.681 09:57:59 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:28.681 09:57:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:28.681 09:57:59 -- common/autotest_common.sh@10 -- # set +x 00:05:28.681 ************************************ 00:05:28.681 START TEST dm_mount 00:05:28.681 ************************************ 00:05:28.681 09:57:59 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:28.681 09:57:59 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:28.681 09:57:59 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:28.681 09:57:59 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:28.681 09:57:59 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:28.681 09:57:59 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:28.681 09:57:59 -- setup/common.sh@40 -- # local part_no=2 00:05:28.681 09:57:59 -- setup/common.sh@41 -- # local size=1073741824 00:05:28.681 09:57:59 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:28.681 09:57:59 -- setup/common.sh@44 -- # parts=() 00:05:28.681 09:57:59 -- setup/common.sh@44 -- # local parts 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.681 09:57:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part++ )) 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.681 09:57:59 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part++ )) 00:05:28.681 09:57:59 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:28.681 09:57:59 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:28.681 09:57:59 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:28.681 09:57:59 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:29.615 Creating new GPT entries in memory. 00:05:29.615 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:29.615 other utilities. 00:05:29.615 09:58:00 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:29.615 09:58:00 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:29.615 09:58:00 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:29.615 09:58:00 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:29.615 09:58:00 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:30.989 Creating new GPT entries in memory. 00:05:30.989 The operation has completed successfully. 00:05:30.989 09:58:01 -- setup/common.sh@57 -- # (( part++ )) 00:05:30.989 09:58:01 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:30.989 09:58:01 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:30.989 09:58:01 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:30.989 09:58:01 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:31.925 The operation has completed successfully. 00:05:31.925 09:58:02 -- setup/common.sh@57 -- # (( part++ )) 00:05:31.925 09:58:02 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.925 09:58:02 -- setup/common.sh@62 -- # wait 66272 00:05:31.925 09:58:02 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:31.925 09:58:02 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.925 09:58:02 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.925 09:58:02 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:31.925 09:58:02 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:31.925 09:58:02 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.925 09:58:02 -- setup/devices.sh@161 -- # break 00:05:31.925 09:58:02 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.925 09:58:02 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:31.925 09:58:02 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:31.925 09:58:02 -- setup/devices.sh@166 -- # dm=dm-0 00:05:31.925 09:58:02 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:31.925 09:58:02 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:31.925 09:58:02 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.925 09:58:02 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:31.925 09:58:02 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.925 09:58:02 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:31.925 09:58:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:31.925 09:58:02 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.925 09:58:02 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.925 09:58:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:31.925 09:58:02 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:31.925 09:58:02 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:31.925 09:58:02 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:31.925 09:58:02 -- setup/devices.sh@53 -- # local found=0 00:05:31.925 09:58:02 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:31.925 09:58:02 -- setup/devices.sh@56 -- # : 00:05:31.925 09:58:02 -- setup/devices.sh@59 -- # local pci status 00:05:31.925 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.925 09:58:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:31.925 09:58:02 -- setup/devices.sh@47 -- # setup output config 00:05:31.925 09:58:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:31.925 09:58:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:31.925 09:58:02 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.925 09:58:02 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:31.925 09:58:02 -- setup/devices.sh@63 -- # found=1 00:05:31.925 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:31.925 09:58:02 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:31.925 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.183 09:58:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.183 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.442 09:58:02 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.442 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.442 09:58:02 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:32.442 09:58:02 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:32.442 09:58:02 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.442 09:58:02 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:32.442 09:58:02 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:32.442 09:58:02 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:32.442 09:58:02 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:32.442 09:58:02 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:32.442 09:58:02 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:32.442 09:58:02 -- setup/devices.sh@50 -- # local mount_point= 00:05:32.442 09:58:02 -- setup/devices.sh@51 -- # local test_file= 00:05:32.442 09:58:02 -- setup/devices.sh@53 -- # local found=0 00:05:32.442 09:58:02 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:32.442 09:58:02 -- setup/devices.sh@59 -- # local pci status 00:05:32.442 09:58:02 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.442 09:58:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:32.442 09:58:02 -- setup/devices.sh@47 -- # setup output config 00:05:32.442 09:58:02 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:32.442 09:58:02 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:32.701 09:58:03 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.701 09:58:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:32.701 09:58:03 -- setup/devices.sh@63 -- # found=1 00:05:32.701 09:58:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.701 09:58:03 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.701 09:58:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.960 09:58:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.960 09:58:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:32.960 09:58:03 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:32.960 09:58:03 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.218 09:58:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.218 09:58:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.218 09:58:03 -- setup/devices.sh@68 -- # return 0 00:05:33.218 09:58:03 -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.218 09:58:03 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.218 09:58:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.218 09:58:03 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.218 09:58:03 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.218 09:58:03 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.218 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.218 09:58:03 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.218 09:58:03 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.218 00:05:33.218 real 0m4.552s 00:05:33.218 user 0m0.677s 00:05:33.218 sys 0m0.813s 00:05:33.218 09:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.218 ************************************ 00:05:33.218 END TEST dm_mount 00:05:33.218 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.218 ************************************ 00:05:33.218 09:58:03 -- setup/devices.sh@1 -- # cleanup 00:05:33.218 09:58:03 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.218 09:58:03 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:33.218 09:58:03 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.218 09:58:03 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.218 09:58:03 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.218 09:58:03 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.493 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.493 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:33.493 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.493 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.493 09:58:03 -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.493 09:58:03 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.493 09:58:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.493 09:58:03 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.493 09:58:03 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.493 09:58:03 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.493 09:58:03 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.493 00:05:33.493 real 0m10.658s 00:05:33.493 user 0m2.400s 00:05:33.493 sys 0m2.602s 00:05:33.493 09:58:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.493 09:58:03 -- common/autotest_common.sh@10 -- # set +x 00:05:33.493 ************************************ 00:05:33.493 END TEST devices 00:05:33.493 ************************************ 00:05:33.493 ************************************ 00:05:33.493 END TEST setup.sh 00:05:33.493 ************************************ 00:05:33.493 00:05:33.493 real 0m21.888s 00:05:33.493 user 0m7.320s 00:05:33.493 sys 0m9.008s 00:05:33.493 09:58:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:33.493 09:58:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.758 09:58:04 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:33.758 Hugepages 00:05:33.758 node hugesize free / total 00:05:33.758 node0 1048576kB 0 / 0 00:05:33.758 node0 2048kB 2048 / 2048 00:05:33.758 00:05:33.758 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:33.758 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:34.017 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:34.017 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:34.017 09:58:04 -- spdk/autotest.sh@141 -- # uname -s 00:05:34.017 09:58:04 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:34.017 09:58:04 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:34.017 09:58:04 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:34.584 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.843 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.843 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:34.843 09:58:05 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:35.778 09:58:06 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:35.778 09:58:06 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:35.778 09:58:06 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:35.778 09:58:06 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:35.778 09:58:06 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:35.778 09:58:06 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:35.778 09:58:06 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:35.778 09:58:06 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:35.778 09:58:06 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.036 09:58:06 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:36.036 09:58:06 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:36.036 09:58:06 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:36.295 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.295 Waiting for block devices as requested 00:05:36.295 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.295 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:36.554 09:58:06 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:36.554 09:58:06 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:36.554 09:58:06 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:36.554 09:58:06 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:36.554 09:58:06 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:36.554 09:58:06 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1542 -- # continue 00:05:36.554 09:58:06 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:36.554 09:58:06 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:36.554 09:58:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:36.554 09:58:06 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:36.554 09:58:06 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:36.554 09:58:06 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:36.554 09:58:06 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:36.554 09:58:06 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:36.554 09:58:06 -- common/autotest_common.sh@1542 -- # continue 00:05:36.554 09:58:06 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:36.554 09:58:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:36.554 09:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:36.554 09:58:06 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:36.554 09:58:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:36.554 09:58:06 -- common/autotest_common.sh@10 -- # set +x 00:05:36.554 09:58:07 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:37.121 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.379 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:37.379 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:37.379 09:58:07 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:37.379 09:58:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:37.379 09:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.379 09:58:07 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:37.379 09:58:07 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:37.379 09:58:07 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:37.379 09:58:07 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:37.379 09:58:07 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:37.379 09:58:07 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:37.379 09:58:07 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:37.379 09:58:07 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:37.379 09:58:07 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.379 09:58:07 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.379 09:58:07 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:37.638 09:58:07 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:37.638 09:58:07 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:37.638 09:58:07 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:37.638 09:58:07 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:37.638 09:58:07 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:37.638 09:58:07 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.638 09:58:07 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:37.638 09:58:07 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:37.638 09:58:07 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:37.638 09:58:07 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:37.638 09:58:07 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:37.638 09:58:07 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:37.638 09:58:07 -- common/autotest_common.sh@1578 -- # return 0 00:05:37.638 09:58:07 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:37.638 09:58:07 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:37.638 09:58:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:37.638 09:58:07 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:37.638 09:58:07 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:37.638 09:58:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:37.638 09:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 09:58:07 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.638 09:58:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.638 09:58:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.638 09:58:07 -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 ************************************ 00:05:37.638 START TEST env 00:05:37.638 ************************************ 00:05:37.638 09:58:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:37.638 * Looking for test storage... 
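[editor's note] The two probes traced above are the core of the pre-cleanup logic: nvme id-ctrl output is parsed for the OACS word (bit 0x8 of which advertises namespace management) and for unallocated capacity, and opal_revert_cleanup then selects target controllers by PCI device ID (here the devices report 0x0010, so the 0x0a54 filter matches nothing and the bdf list stays empty). A condensed sketch of that pattern, with the controller node and PCI address hard-coded purely for illustration and assuming nvme-cli is installed; the generic versions are the traced helpers get_nvme_ctrlr_from_bdf and get_nvme_bdfs_by_id:

    #!/usr/bin/env bash
    # Sketch of the checks above; ctrlr and bdf are assumptions for illustration.
    ctrlr=/dev/nvme0
    bdf=0000:00:06.0

    # OACS comes from Identify Controller; bit 0x8 = namespace management support.
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (( oacs & 0x8 )) && echo "namespace management supported"

    # unvmcap == 0 means all NVM capacity is already allocated to namespaces.
    unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
    (( unvmcap == 0 )) && echo "no unallocated capacity, nothing to revert"

    # Select controllers by PCI device ID, as get_nvme_bdfs_by_id 0x0a54 does.
    [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]] \
        && echo "$bdf matches 0x0a54"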
00:05:37.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:37.638 09:58:08 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.638 09:58:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.638 09:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.638 09:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.638 ************************************ 00:05:37.638 START TEST env_memory 00:05:37.638 ************************************ 00:05:37.638 09:58:08 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:37.638 00:05:37.638 00:05:37.638 CUnit - A unit testing framework for C - Version 2.1-3 00:05:37.638 http://cunit.sourceforge.net/ 00:05:37.638 00:05:37.638 00:05:37.638 Suite: memory 00:05:37.638 Test: alloc and free memory map ...[2024-07-14 09:58:08.138314] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:37.638 passed 00:05:37.638 Test: mem map translation ...[2024-07-14 09:58:08.169243] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:37.638 [2024-07-14 09:58:08.169299] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:37.638 [2024-07-14 09:58:08.169361] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:37.638 [2024-07-14 09:58:08.169372] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:37.898 passed 00:05:37.898 Test: mem map registration ...[2024-07-14 09:58:08.233011] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:37.898 [2024-07-14 09:58:08.233068] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:37.898 passed 00:05:37.898 Test: mem map adjacent registrations ...passed 00:05:37.898 00:05:37.898 Run Summary: Type Total Ran Passed Failed Inactive 00:05:37.898 suites 1 1 n/a 0 0 00:05:37.898 tests 4 4 4 0 0 00:05:37.898 asserts 152 152 152 0 n/a 00:05:37.898 00:05:37.898 Elapsed time = 0.223 seconds 00:05:37.898 00:05:37.898 real 0m0.242s 00:05:37.898 user 0m0.229s 00:05:37.898 sys 0m0.010s 00:05:37.898 09:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:37.898 09:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.898 ************************************ 00:05:37.898 END TEST env_memory 00:05:37.898 ************************************ 00:05:37.898 09:58:08 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.898 09:58:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:37.898 09:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:37.898 09:58:08 -- common/autotest_common.sh@10 -- # set +x 00:05:37.898 ************************************ 00:05:37.898 START TEST env_vtophys 00:05:37.898 ************************************ 00:05:37.898 09:58:08 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:37.898 EAL: lib.eal log level changed from notice to debug 00:05:37.898 EAL: Detected lcore 0 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 1 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 2 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 3 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 4 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 5 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 6 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 7 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 8 as core 0 on socket 0 00:05:37.898 EAL: Detected lcore 9 as core 0 on socket 0 00:05:37.898 EAL: Maximum logical cores by configuration: 128 00:05:37.898 EAL: Detected CPU lcores: 10 00:05:37.898 EAL: Detected NUMA nodes: 1 00:05:37.898 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:37.898 EAL: Detected shared linkage of DPDK 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:37.898 EAL: Registered [vdev] bus. 00:05:37.898 EAL: bus.vdev log level changed from disabled to notice 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:37.898 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:37.898 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:37.898 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:37.898 EAL: No shared files mode enabled, IPC will be disabled 00:05:37.898 EAL: No shared files mode enabled, IPC is disabled 00:05:37.898 EAL: Selected IOVA mode 'PA' 00:05:37.898 EAL: Probing VFIO support... 00:05:37.898 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:37.898 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:37.898 EAL: Ask a virtual area of 0x2e000 bytes 00:05:37.898 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:37.898 EAL: Setting up physically contiguous memory... 
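[editor's note] The probe order in the EAL banner above is worth noting: DPDK first looks for the vfio kernel modules under /sys/module, and only because they are absent in this VM does it skip VFIO, fall back to uio-style access, and select IOVA mode 'PA'. A preflight in the same spirit (a sketch checking only the two module paths named in the log; the real rte_eal_init probe handles more cases):

    #!/usr/bin/env bash
    # Sketch: the same preconditions the EAL lines above report on.

    # VFIO: EAL checks /sys/module/vfio and vfio_pci; absent here, hence IOVA 'PA'.
    for m in vfio vfio_pci; do
        [[ -d /sys/module/$m ]] && echo "$m loaded" || echo "$m missing"
    done

    # Hugepages: the 2048 kB pages backing the memseg lists set up below.
    grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo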
00:05:37.898 EAL: Setting maximum number of open files to 524288 00:05:37.898 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:37.898 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:37.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.898 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:37.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.898 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:37.898 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:37.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.898 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:37.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.898 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:37.898 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:37.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.898 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:37.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.898 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:37.898 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:37.898 EAL: Ask a virtual area of 0x61000 bytes 00:05:37.898 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:37.898 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:37.898 EAL: Ask a virtual area of 0x400000000 bytes 00:05:37.898 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:37.898 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:37.898 EAL: Hugepages will be freed exactly as allocated. 00:05:37.898 EAL: No shared files mode enabled, IPC is disabled 00:05:37.898 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: TSC frequency is ~2200000 KHz 00:05:38.158 EAL: Main lcore 0 is ready (tid=7fb66761fa00;cpuset=[0]) 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 0 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 2MB 00:05:38.158 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:38.158 EAL: Mem event callback 'spdk:(nil)' registered 00:05:38.158 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:38.158 00:05:38.158 00:05:38.158 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.158 http://cunit.sourceforge.net/ 00:05:38.158 00:05:38.158 00:05:38.158 Suite: components_suite 00:05:38.158 Test: vtophys_malloc_test ...passed 00:05:38.158 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 4MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 4MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 6MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 6MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 10MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 10MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 18MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 18MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 34MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 34MB 00:05:38.158 EAL: Trying to obtain current memory policy. 
00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 66MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 66MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.158 EAL: Restoring previous memory policy: 4 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was expanded by 130MB 00:05:38.158 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.158 EAL: request: mp_malloc_sync 00:05:38.158 EAL: No shared files mode enabled, IPC is disabled 00:05:38.158 EAL: Heap on socket 0 was shrunk by 130MB 00:05:38.158 EAL: Trying to obtain current memory policy. 00:05:38.158 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.418 EAL: Restoring previous memory policy: 4 00:05:38.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.418 EAL: request: mp_malloc_sync 00:05:38.418 EAL: No shared files mode enabled, IPC is disabled 00:05:38.418 EAL: Heap on socket 0 was expanded by 258MB 00:05:38.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.418 EAL: request: mp_malloc_sync 00:05:38.418 EAL: No shared files mode enabled, IPC is disabled 00:05:38.418 EAL: Heap on socket 0 was shrunk by 258MB 00:05:38.418 EAL: Trying to obtain current memory policy. 00:05:38.418 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.418 EAL: Restoring previous memory policy: 4 00:05:38.418 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.418 EAL: request: mp_malloc_sync 00:05:38.418 EAL: No shared files mode enabled, IPC is disabled 00:05:38.418 EAL: Heap on socket 0 was expanded by 514MB 00:05:38.697 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.697 EAL: request: mp_malloc_sync 00:05:38.697 EAL: No shared files mode enabled, IPC is disabled 00:05:38.697 EAL: Heap on socket 0 was shrunk by 514MB 00:05:38.697 EAL: Trying to obtain current memory policy. 
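[editor's note] Every round above follows the same shape at roughly double the previous size: set the NUMA memory policy, allocate through the SPDK env so the registered 'spdk:(nil)' mem event callback fires and hugepages are mapped in ("Heap on socket 0 was expanded by N MB"), then free and watch the same amount be unmapped. Because the heap is backed by the 2048 kB hugepages listed earlier, the rounds are also visible from outside the test; a small observation loop (a sketch, stop it with Ctrl-C):

    #!/usr/bin/env bash
    # Sketch: poll free hugepages while a malloc test runs in another terminal.
    while sleep 0.2; do
        printf '%s ' "$(date +%T)"
        grep HugePages_Free /proc/meminfo
    done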
00:05:38.697 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:38.955 EAL: Restoring previous memory policy: 4 00:05:38.955 EAL: Calling mem event callback 'spdk:(nil)' 00:05:38.955 EAL: request: mp_malloc_sync 00:05:38.955 EAL: No shared files mode enabled, IPC is disabled 00:05:38.955 EAL: Heap on socket 0 was expanded by 1026MB 00:05:39.213 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.471 passed 00:05:39.471 00:05:39.471 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.471 suites 1 1 n/a 0 0 00:05:39.471 tests 2 2 2 0 0 00:05:39.471 asserts 5302 5302 5302 0 n/a 00:05:39.471 00:05:39.471 Elapsed time = 1.254 seconds 00:05:39.471 EAL: request: mp_malloc_sync 00:05:39.471 EAL: No shared files mode enabled, IPC is disabled 00:05:39.471 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:39.471 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.471 EAL: request: mp_malloc_sync 00:05:39.471 EAL: No shared files mode enabled, IPC is disabled 00:05:39.471 EAL: Heap on socket 0 was shrunk by 2MB 00:05:39.471 EAL: No shared files mode enabled, IPC is disabled 00:05:39.471 EAL: No shared files mode enabled, IPC is disabled 00:05:39.471 EAL: No shared files mode enabled, IPC is disabled 00:05:39.471 00:05:39.471 real 0m1.448s 00:05:39.471 user 0m0.800s 00:05:39.471 sys 0m0.516s 00:05:39.471 09:58:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.471 09:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 END TEST env_vtophys 00:05:39.471 ************************************ 00:05:39.471 09:58:09 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.471 09:58:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.471 09:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.471 09:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 START TEST env_pci 00:05:39.471 ************************************ 00:05:39.471 09:58:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:39.471 00:05:39.471 00:05:39.471 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.471 http://cunit.sourceforge.net/ 00:05:39.471 00:05:39.471 00:05:39.471 Suite: pci 00:05:39.471 Test: pci_hook ...[2024-07-14 09:58:09.896189] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 67404 has claimed it 00:05:39.471 passed 00:05:39.471 00:05:39.471 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.471 suites 1 1 n/a 0 0 00:05:39.471 tests 1 1 1 0 0 00:05:39.471 asserts 25 25 25 0 n/a 00:05:39.471 00:05:39.471 Elapsed time = 0.002 seconds 00:05:39.471 EAL: Cannot find device (10000:00:01.0) 00:05:39.471 EAL: Failed to attach device on primary process 00:05:39.471 00:05:39.471 real 0m0.021s 00:05:39.471 user 0m0.009s 00:05:39.471 sys 0m0.012s 00:05:39.471 09:58:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.471 09:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 END TEST env_pci 00:05:39.471 ************************************ 00:05:39.471 09:58:09 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:39.471 09:58:09 -- env/env.sh@15 -- # uname 00:05:39.471 09:58:09 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:39.471 09:58:09 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:39.471 09:58:09 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.471 09:58:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:39.471 09:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.471 09:58:09 -- common/autotest_common.sh@10 -- # set +x 00:05:39.471 ************************************ 00:05:39.471 START TEST env_dpdk_post_init 00:05:39.471 ************************************ 00:05:39.472 09:58:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:39.472 EAL: Detected CPU lcores: 10 00:05:39.472 EAL: Detected NUMA nodes: 1 00:05:39.472 EAL: Detected shared linkage of DPDK 00:05:39.472 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.472 EAL: Selected IOVA mode 'PA' 00:05:39.728 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:39.729 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:39.729 Starting DPDK initialization... 00:05:39.729 Starting SPDK post initialization... 00:05:39.729 SPDK NVMe probe 00:05:39.729 Attaching to 0000:00:06.0 00:05:39.729 Attaching to 0000:00:07.0 00:05:39.729 Attached to 0000:00:06.0 00:05:39.729 Attached to 0000:00:07.0 00:05:39.729 Cleaning up... 00:05:39.729 00:05:39.729 real 0m0.168s 00:05:39.729 user 0m0.037s 00:05:39.729 sys 0m0.031s 00:05:39.729 09:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.729 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.729 ************************************ 00:05:39.729 END TEST env_dpdk_post_init 00:05:39.729 ************************************ 00:05:39.729 09:58:10 -- env/env.sh@26 -- # uname 00:05:39.729 09:58:10 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:39.729 09:58:10 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.729 09:58:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.729 09:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.729 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.729 ************************************ 00:05:39.729 START TEST env_mem_callbacks 00:05:39.729 ************************************ 00:05:39.729 09:58:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:39.729 EAL: Detected CPU lcores: 10 00:05:39.729 EAL: Detected NUMA nodes: 1 00:05:39.729 EAL: Detected shared linkage of DPDK 00:05:39.729 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:39.729 EAL: Selected IOVA mode 'PA' 00:05:39.987 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:39.987 00:05:39.987 00:05:39.987 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.987 http://cunit.sourceforge.net/ 00:05:39.987 00:05:39.987 00:05:39.987 Suite: memory 00:05:39.987 Test: test ... 
00:05:39.987 register 0x200000200000 2097152 00:05:39.987 malloc 3145728 00:05:39.987 register 0x200000400000 4194304 00:05:39.987 buf 0x200000500000 len 3145728 PASSED 00:05:39.987 malloc 64 00:05:39.987 buf 0x2000004fff40 len 64 PASSED 00:05:39.987 malloc 4194304 00:05:39.987 register 0x200000800000 6291456 00:05:39.987 buf 0x200000a00000 len 4194304 PASSED 00:05:39.987 free 0x200000500000 3145728 00:05:39.987 free 0x2000004fff40 64 00:05:39.987 unregister 0x200000400000 4194304 PASSED 00:05:39.987 free 0x200000a00000 4194304 00:05:39.987 unregister 0x200000800000 6291456 PASSED 00:05:39.987 malloc 8388608 00:05:39.987 register 0x200000400000 10485760 00:05:39.987 buf 0x200000600000 len 8388608 PASSED 00:05:39.987 free 0x200000600000 8388608 00:05:39.987 unregister 0x200000400000 10485760 PASSED 00:05:39.987 passed 00:05:39.987 00:05:39.987 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.987 suites 1 1 n/a 0 0 00:05:39.987 tests 1 1 1 0 0 00:05:39.987 asserts 15 15 15 0 n/a 00:05:39.987 00:05:39.987 Elapsed time = 0.009 seconds 00:05:39.987 00:05:39.987 real 0m0.145s 00:05:39.987 user 0m0.020s 00:05:39.987 sys 0m0.024s 00:05:39.987 09:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.987 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 ************************************ 00:05:39.987 END TEST env_mem_callbacks 00:05:39.987 ************************************ 00:05:39.987 00:05:39.987 real 0m2.373s 00:05:39.987 user 0m1.222s 00:05:39.987 sys 0m0.800s 00:05:39.987 09:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.987 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 ************************************ 00:05:39.987 END TEST env 00:05:39.987 ************************************ 00:05:39.987 09:58:10 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.987 09:58:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.987 09:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.987 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:39.987 ************************************ 00:05:39.987 START TEST rpc 00:05:39.987 ************************************ 00:05:39.987 09:58:10 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:39.987 * Looking for test storage... 00:05:39.987 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:39.987 09:58:10 -- rpc/rpc.sh@65 -- # spdk_pid=67512 00:05:39.987 09:58:10 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.987 09:58:10 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:39.987 09:58:10 -- rpc/rpc.sh@67 -- # waitforlisten 67512 00:05:39.987 09:58:10 -- common/autotest_common.sh@819 -- # '[' -z 67512 ']' 00:05:39.987 09:58:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.987 09:58:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:39.987 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:39.987 09:58:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
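[editor's note] The last lines above are the standard target bring-up: spdk_tgt is launched with '-e bdev' to enable the bdev tracepoint group, and waitforlisten polls until the RPC server answers on /var/tmp/spdk.sock. A standalone version of that handshake, assuming the build paths used in this run (rpc_cmd in the traces is a thin wrapper over scripts/rpc.py):

    #!/usr/bin/env bash
    # Sketch: start spdk_tgt and block until its RPC socket is reachable.
    SPDK=/home/vagrant/spdk_repo/spdk

    "$SPDK/build/bin/spdk_tgt" -e bdev &
    pid=$!

    # Any cheap RPC works as a liveness probe.
    until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
        kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited early" >&2; exit 1; }
        sleep 0.5
    done
    echo "spdk_tgt (pid $pid) is listening on /var/tmp/spdk.sock"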
00:05:39.987 09:58:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:39.987 09:58:10 -- common/autotest_common.sh@10 -- # set +x 00:05:40.274 [2024-07-14 09:58:10.562743] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:40.274 [2024-07-14 09:58:10.562875] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67512 ] 00:05:40.274 [2024-07-14 09:58:10.704193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.274 [2024-07-14 09:58:10.758825] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:40.274 [2024-07-14 09:58:10.758995] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:40.274 [2024-07-14 09:58:10.759012] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 67512' to capture a snapshot of events at runtime. 00:05:40.274 [2024-07-14 09:58:10.759020] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid67512 for offline analysis/debug. 00:05:40.274 [2024-07-14 09:58:10.759045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.234 09:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:41.234 09:58:11 -- common/autotest_common.sh@852 -- # return 0 00:05:41.234 09:58:11 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.234 09:58:11 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.234 09:58:11 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:41.234 09:58:11 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:41.234 09:58:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.234 09:58:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.234 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 ************************************ 00:05:41.234 START TEST rpc_integrity 00:05:41.234 ************************************ 00:05:41.234 09:58:11 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:41.234 09:58:11 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:41.234 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.234 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.234 09:58:11 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:41.234 09:58:11 -- rpc/rpc.sh@13 -- # jq length 00:05:41.234 09:58:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:41.234 09:58:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:41.234 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.234 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.234 09:58:11 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:41.234 09:58:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:41.234 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.234 09:58:11 -- 
common/autotest_common.sh@10 -- # set +x 00:05:41.234 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.234 09:58:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:41.234 { 00:05:41.234 "aliases": [ 00:05:41.234 "46fb21ec-8306-4c71-9267-a8f3b8af62ef" 00:05:41.234 ], 00:05:41.234 "assigned_rate_limits": { 00:05:41.234 "r_mbytes_per_sec": 0, 00:05:41.234 "rw_ios_per_sec": 0, 00:05:41.234 "rw_mbytes_per_sec": 0, 00:05:41.234 "w_mbytes_per_sec": 0 00:05:41.234 }, 00:05:41.234 "block_size": 512, 00:05:41.234 "claimed": false, 00:05:41.234 "driver_specific": {}, 00:05:41.234 "memory_domains": [ 00:05:41.234 { 00:05:41.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.234 "dma_device_type": 2 00:05:41.234 } 00:05:41.234 ], 00:05:41.234 "name": "Malloc0", 00:05:41.234 "num_blocks": 16384, 00:05:41.234 "product_name": "Malloc disk", 00:05:41.234 "supported_io_types": { 00:05:41.234 "abort": true, 00:05:41.234 "compare": false, 00:05:41.234 "compare_and_write": false, 00:05:41.234 "flush": true, 00:05:41.234 "nvme_admin": false, 00:05:41.234 "nvme_io": false, 00:05:41.234 "read": true, 00:05:41.234 "reset": true, 00:05:41.234 "unmap": true, 00:05:41.234 "write": true, 00:05:41.234 "write_zeroes": true 00:05:41.234 }, 00:05:41.234 "uuid": "46fb21ec-8306-4c71-9267-a8f3b8af62ef", 00:05:41.234 "zoned": false 00:05:41.234 } 00:05:41.234 ]' 00:05:41.234 09:58:11 -- rpc/rpc.sh@17 -- # jq length 00:05:41.234 09:58:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:41.234 09:58:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:41.234 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.234 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 [2024-07-14 09:58:11.651864] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:41.234 [2024-07-14 09:58:11.651922] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.234 [2024-07-14 09:58:11.651938] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x90f0b0 00:05:41.234 [2024-07-14 09:58:11.651947] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.234 [2024-07-14 09:58:11.653402] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.234 [2024-07-14 09:58:11.653437] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:41.234 Passthru0 00:05:41.234 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.234 09:58:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:41.234 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.234 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.234 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.234 09:58:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:41.234 { 00:05:41.234 "aliases": [ 00:05:41.234 "46fb21ec-8306-4c71-9267-a8f3b8af62ef" 00:05:41.234 ], 00:05:41.234 "assigned_rate_limits": { 00:05:41.234 "r_mbytes_per_sec": 0, 00:05:41.234 "rw_ios_per_sec": 0, 00:05:41.234 "rw_mbytes_per_sec": 0, 00:05:41.234 "w_mbytes_per_sec": 0 00:05:41.234 }, 00:05:41.234 "block_size": 512, 00:05:41.234 "claim_type": "exclusive_write", 00:05:41.234 "claimed": true, 00:05:41.234 "driver_specific": {}, 00:05:41.234 "memory_domains": [ 00:05:41.234 { 00:05:41.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.234 "dma_device_type": 2 00:05:41.234 } 00:05:41.234 ], 00:05:41.234 "name": "Malloc0", 00:05:41.234 "num_blocks": 16384, 
00:05:41.234 "product_name": "Malloc disk", 00:05:41.234 "supported_io_types": { 00:05:41.234 "abort": true, 00:05:41.234 "compare": false, 00:05:41.234 "compare_and_write": false, 00:05:41.234 "flush": true, 00:05:41.234 "nvme_admin": false, 00:05:41.234 "nvme_io": false, 00:05:41.234 "read": true, 00:05:41.234 "reset": true, 00:05:41.234 "unmap": true, 00:05:41.234 "write": true, 00:05:41.234 "write_zeroes": true 00:05:41.234 }, 00:05:41.234 "uuid": "46fb21ec-8306-4c71-9267-a8f3b8af62ef", 00:05:41.234 "zoned": false 00:05:41.234 }, 00:05:41.234 { 00:05:41.234 "aliases": [ 00:05:41.234 "8e41b665-a360-54c7-ad8a-5cc49daada19" 00:05:41.234 ], 00:05:41.234 "assigned_rate_limits": { 00:05:41.234 "r_mbytes_per_sec": 0, 00:05:41.234 "rw_ios_per_sec": 0, 00:05:41.234 "rw_mbytes_per_sec": 0, 00:05:41.234 "w_mbytes_per_sec": 0 00:05:41.234 }, 00:05:41.234 "block_size": 512, 00:05:41.234 "claimed": false, 00:05:41.234 "driver_specific": { 00:05:41.234 "passthru": { 00:05:41.234 "base_bdev_name": "Malloc0", 00:05:41.234 "name": "Passthru0" 00:05:41.234 } 00:05:41.234 }, 00:05:41.234 "memory_domains": [ 00:05:41.234 { 00:05:41.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.234 "dma_device_type": 2 00:05:41.234 } 00:05:41.234 ], 00:05:41.235 "name": "Passthru0", 00:05:41.235 "num_blocks": 16384, 00:05:41.235 "product_name": "passthru", 00:05:41.235 "supported_io_types": { 00:05:41.235 "abort": true, 00:05:41.235 "compare": false, 00:05:41.235 "compare_and_write": false, 00:05:41.235 "flush": true, 00:05:41.235 "nvme_admin": false, 00:05:41.235 "nvme_io": false, 00:05:41.235 "read": true, 00:05:41.235 "reset": true, 00:05:41.235 "unmap": true, 00:05:41.235 "write": true, 00:05:41.235 "write_zeroes": true 00:05:41.235 }, 00:05:41.235 "uuid": "8e41b665-a360-54c7-ad8a-5cc49daada19", 00:05:41.235 "zoned": false 00:05:41.235 } 00:05:41.235 ]' 00:05:41.235 09:58:11 -- rpc/rpc.sh@21 -- # jq length 00:05:41.235 09:58:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:41.235 09:58:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:41.235 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.235 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.235 09:58:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:41.235 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.235 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.235 09:58:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:41.235 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.235 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.235 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.235 09:58:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:41.235 09:58:11 -- rpc/rpc.sh@26 -- # jq length 00:05:41.493 09:58:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:41.493 00:05:41.493 real 0m0.319s 00:05:41.493 user 0m0.208s 00:05:41.493 sys 0m0.037s 00:05:41.493 09:58:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 ************************************ 00:05:41.493 END TEST rpc_integrity 00:05:41.493 ************************************ 00:05:41.493 09:58:11 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:41.493 09:58:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.493 
09:58:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 ************************************ 00:05:41.493 START TEST rpc_plugins 00:05:41.493 ************************************ 00:05:41.493 09:58:11 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:41.493 09:58:11 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:41.493 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.493 09:58:11 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:41.493 09:58:11 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:41.493 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.493 09:58:11 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:41.493 { 00:05:41.493 "aliases": [ 00:05:41.493 "ae3efdaf-3a1d-4f51-b73b-7c03d87f84a4" 00:05:41.493 ], 00:05:41.493 "assigned_rate_limits": { 00:05:41.493 "r_mbytes_per_sec": 0, 00:05:41.493 "rw_ios_per_sec": 0, 00:05:41.493 "rw_mbytes_per_sec": 0, 00:05:41.493 "w_mbytes_per_sec": 0 00:05:41.493 }, 00:05:41.493 "block_size": 4096, 00:05:41.493 "claimed": false, 00:05:41.493 "driver_specific": {}, 00:05:41.493 "memory_domains": [ 00:05:41.493 { 00:05:41.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:41.493 "dma_device_type": 2 00:05:41.493 } 00:05:41.493 ], 00:05:41.493 "name": "Malloc1", 00:05:41.493 "num_blocks": 256, 00:05:41.493 "product_name": "Malloc disk", 00:05:41.493 "supported_io_types": { 00:05:41.493 "abort": true, 00:05:41.493 "compare": false, 00:05:41.493 "compare_and_write": false, 00:05:41.493 "flush": true, 00:05:41.493 "nvme_admin": false, 00:05:41.493 "nvme_io": false, 00:05:41.493 "read": true, 00:05:41.493 "reset": true, 00:05:41.493 "unmap": true, 00:05:41.493 "write": true, 00:05:41.493 "write_zeroes": true 00:05:41.493 }, 00:05:41.493 "uuid": "ae3efdaf-3a1d-4f51-b73b-7c03d87f84a4", 00:05:41.493 "zoned": false 00:05:41.493 } 00:05:41.493 ]' 00:05:41.493 09:58:11 -- rpc/rpc.sh@32 -- # jq length 00:05:41.493 09:58:11 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:41.493 09:58:11 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:41.493 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.493 09:58:11 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:41.493 09:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.493 09:58:11 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 09:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.493 09:58:11 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:41.493 09:58:11 -- rpc/rpc.sh@36 -- # jq length 00:05:41.493 09:58:12 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:41.493 00:05:41.493 real 0m0.155s 00:05:41.493 user 0m0.098s 00:05:41.493 sys 0m0.021s 00:05:41.493 09:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.493 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.493 ************************************ 00:05:41.493 END TEST rpc_plugins 00:05:41.493 ************************************ 00:05:41.751 09:58:12 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 
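[editor's note] rpc_integrity and rpc_plugins above both reduce to the same round-trip: create a malloc bdev over RPC, check the bdev_get_bdevs JSON with jq (the "'[' 1 == 1 ']'" comparisons in the trace), delete it, and check the list is empty again. The same cycle against a running target, using rpc.py directly (a sketch; the sizes match the "8 512" used above, i.e. 8 MiB with a 512-byte block size):

    #!/usr/bin/env bash
    # Sketch: the create/verify/delete cycle from rpc_integrity and rpc_plugins.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    name=$("$rpc" bdev_malloc_create 8 512)   # prints the new name, e.g. Malloc0
    [[ $("$rpc" bdev_get_bdevs | jq length) -eq 1 ]] || exit 1

    "$rpc" bdev_malloc_delete "$name"
    [[ $("$rpc" bdev_get_bdevs | jq length) -eq 0 ]] || exit 1
    echo "malloc bdev round-trip OK"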
00:05:41.751 09:58:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.751 09:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.751 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.751 ************************************ 00:05:41.751 START TEST rpc_trace_cmd_test 00:05:41.751 ************************************ 00:05:41.751 09:58:12 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:41.751 09:58:12 -- rpc/rpc.sh@40 -- # local info 00:05:41.751 09:58:12 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:41.751 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:41.751 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:41.751 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:41.751 09:58:12 -- rpc/rpc.sh@42 -- # info='{ 00:05:41.751 "bdev": { 00:05:41.751 "mask": "0x8", 00:05:41.751 "tpoint_mask": "0xffffffffffffffff" 00:05:41.751 }, 00:05:41.751 "bdev_nvme": { 00:05:41.751 "mask": "0x4000", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "blobfs": { 00:05:41.751 "mask": "0x80", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "dsa": { 00:05:41.751 "mask": "0x200", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "ftl": { 00:05:41.751 "mask": "0x40", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "iaa": { 00:05:41.751 "mask": "0x1000", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "iscsi_conn": { 00:05:41.751 "mask": "0x2", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "nvme_pcie": { 00:05:41.751 "mask": "0x800", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "nvme_tcp": { 00:05:41.751 "mask": "0x2000", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "nvmf_rdma": { 00:05:41.751 "mask": "0x10", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "nvmf_tcp": { 00:05:41.751 "mask": "0x20", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "scsi": { 00:05:41.751 "mask": "0x4", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "thread": { 00:05:41.751 "mask": "0x400", 00:05:41.751 "tpoint_mask": "0x0" 00:05:41.751 }, 00:05:41.751 "tpoint_group_mask": "0x8", 00:05:41.751 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid67512" 00:05:41.751 }' 00:05:41.751 09:58:12 -- rpc/rpc.sh@43 -- # jq length 00:05:41.751 09:58:12 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:41.751 09:58:12 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:41.751 09:58:12 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:41.751 09:58:12 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:41.751 09:58:12 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:41.751 09:58:12 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:41.751 09:58:12 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:41.751 09:58:12 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.011 09:58:12 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.011 00:05:42.011 real 0m0.267s 00:05:42.011 user 0m0.228s 00:05:42.011 sys 0m0.028s 00:05:42.011 09:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.011 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.011 ************************************ 00:05:42.011 END TEST rpc_trace_cmd_test 00:05:42.011 ************************************ 00:05:42.011 09:58:12 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:05:42.011 09:58:12 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:05:42.011 09:58:12 -- common/autotest_common.sh@1077 -- # 
'[' 2 -le 1 ']' 00:05:42.011 09:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.011 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.011 ************************************ 00:05:42.011 START TEST go_rpc 00:05:42.011 ************************************ 00:05:42.011 09:58:12 -- common/autotest_common.sh@1104 -- # go_rpc 00:05:42.011 09:58:12 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:42.011 09:58:12 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:05:42.011 09:58:12 -- rpc/rpc.sh@52 -- # jq length 00:05:42.011 09:58:12 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:05:42.011 09:58:12 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.011 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.011 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.011 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.011 09:58:12 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:05:42.011 09:58:12 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:42.011 09:58:12 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["86f9dd53-f369-4cfe-9be6-53f4f71349bd"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"86f9dd53-f369-4cfe-9be6-53f4f71349bd","zoned":false}]' 00:05:42.011 09:58:12 -- rpc/rpc.sh@57 -- # jq length 00:05:42.011 09:58:12 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:05:42.011 09:58:12 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:42.011 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.011 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.011 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.011 09:58:12 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:05:42.269 09:58:12 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:05:42.269 09:58:12 -- rpc/rpc.sh@61 -- # jq length 00:05:42.269 ************************************ 00:05:42.269 END TEST go_rpc 00:05:42.269 ************************************ 00:05:42.269 09:58:12 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:05:42.269 00:05:42.269 real 0m0.222s 00:05:42.269 user 0m0.151s 00:05:42.269 sys 0m0.032s 00:05:42.269 09:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.269 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.269 09:58:12 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.269 09:58:12 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:42.269 09:58:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.269 09:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.269 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.269 ************************************ 00:05:42.269 START TEST rpc_daemon_integrity 00:05:42.269 ************************************ 00:05:42.269 09:58:12 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:42.269 09:58:12 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.269 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.269 09:58:12 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.269 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.269 09:58:12 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.269 09:58:12 -- rpc/rpc.sh@13 -- # jq length 00:05:42.269 09:58:12 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.269 09:58:12 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.270 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.270 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.270 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.270 09:58:12 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:05:42.270 09:58:12 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.270 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.270 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.270 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.270 09:58:12 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.270 { 00:05:42.270 "aliases": [ 00:05:42.270 "15a71f9a-f63c-419c-a26c-9b40d985d76a" 00:05:42.270 ], 00:05:42.270 "assigned_rate_limits": { 00:05:42.270 "r_mbytes_per_sec": 0, 00:05:42.270 "rw_ios_per_sec": 0, 00:05:42.270 "rw_mbytes_per_sec": 0, 00:05:42.270 "w_mbytes_per_sec": 0 00:05:42.270 }, 00:05:42.270 "block_size": 512, 00:05:42.270 "claimed": false, 00:05:42.270 "driver_specific": {}, 00:05:42.270 "memory_domains": [ 00:05:42.270 { 00:05:42.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.270 "dma_device_type": 2 00:05:42.270 } 00:05:42.270 ], 00:05:42.270 "name": "Malloc3", 00:05:42.270 "num_blocks": 16384, 00:05:42.270 "product_name": "Malloc disk", 00:05:42.270 "supported_io_types": { 00:05:42.270 "abort": true, 00:05:42.270 "compare": false, 00:05:42.270 "compare_and_write": false, 00:05:42.270 "flush": true, 00:05:42.270 "nvme_admin": false, 00:05:42.270 "nvme_io": false, 00:05:42.270 "read": true, 00:05:42.270 "reset": true, 00:05:42.270 "unmap": true, 00:05:42.270 "write": true, 00:05:42.270 "write_zeroes": true 00:05:42.270 }, 00:05:42.270 "uuid": "15a71f9a-f63c-419c-a26c-9b40d985d76a", 00:05:42.270 "zoned": false 00:05:42.270 } 00:05:42.270 ]' 00:05:42.270 09:58:12 -- rpc/rpc.sh@17 -- # jq length 00:05:42.270 09:58:12 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.270 09:58:12 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:05:42.270 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.270 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.270 [2024-07-14 09:58:12.800822] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.270 [2024-07-14 09:58:12.800874] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.270 [2024-07-14 09:58:12.800890] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xaaeb50 00:05:42.270 [2024-07-14 09:58:12.800898] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.270 [2024-07-14 09:58:12.802302] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.270 [2024-07-14 09:58:12.802342] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.270 Passthru0 00:05:42.270 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.270 09:58:12 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.270 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.270 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.528 
09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.528 09:58:12 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.528 { 00:05:42.528 "aliases": [ 00:05:42.528 "15a71f9a-f63c-419c-a26c-9b40d985d76a" 00:05:42.528 ], 00:05:42.528 "assigned_rate_limits": { 00:05:42.528 "r_mbytes_per_sec": 0, 00:05:42.528 "rw_ios_per_sec": 0, 00:05:42.528 "rw_mbytes_per_sec": 0, 00:05:42.528 "w_mbytes_per_sec": 0 00:05:42.528 }, 00:05:42.528 "block_size": 512, 00:05:42.528 "claim_type": "exclusive_write", 00:05:42.528 "claimed": true, 00:05:42.528 "driver_specific": {}, 00:05:42.528 "memory_domains": [ 00:05:42.528 { 00:05:42.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.528 "dma_device_type": 2 00:05:42.528 } 00:05:42.528 ], 00:05:42.528 "name": "Malloc3", 00:05:42.528 "num_blocks": 16384, 00:05:42.528 "product_name": "Malloc disk", 00:05:42.528 "supported_io_types": { 00:05:42.528 "abort": true, 00:05:42.528 "compare": false, 00:05:42.528 "compare_and_write": false, 00:05:42.528 "flush": true, 00:05:42.528 "nvme_admin": false, 00:05:42.528 "nvme_io": false, 00:05:42.528 "read": true, 00:05:42.528 "reset": true, 00:05:42.528 "unmap": true, 00:05:42.528 "write": true, 00:05:42.528 "write_zeroes": true 00:05:42.528 }, 00:05:42.528 "uuid": "15a71f9a-f63c-419c-a26c-9b40d985d76a", 00:05:42.528 "zoned": false 00:05:42.528 }, 00:05:42.528 { 00:05:42.528 "aliases": [ 00:05:42.528 "f27a8912-12f7-5066-a16d-fd886b4f6c05" 00:05:42.528 ], 00:05:42.528 "assigned_rate_limits": { 00:05:42.528 "r_mbytes_per_sec": 0, 00:05:42.528 "rw_ios_per_sec": 0, 00:05:42.528 "rw_mbytes_per_sec": 0, 00:05:42.528 "w_mbytes_per_sec": 0 00:05:42.528 }, 00:05:42.528 "block_size": 512, 00:05:42.528 "claimed": false, 00:05:42.528 "driver_specific": { 00:05:42.528 "passthru": { 00:05:42.528 "base_bdev_name": "Malloc3", 00:05:42.528 "name": "Passthru0" 00:05:42.528 } 00:05:42.528 }, 00:05:42.528 "memory_domains": [ 00:05:42.528 { 00:05:42.528 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.528 "dma_device_type": 2 00:05:42.528 } 00:05:42.528 ], 00:05:42.528 "name": "Passthru0", 00:05:42.528 "num_blocks": 16384, 00:05:42.528 "product_name": "passthru", 00:05:42.528 "supported_io_types": { 00:05:42.528 "abort": true, 00:05:42.528 "compare": false, 00:05:42.528 "compare_and_write": false, 00:05:42.528 "flush": true, 00:05:42.528 "nvme_admin": false, 00:05:42.528 "nvme_io": false, 00:05:42.528 "read": true, 00:05:42.528 "reset": true, 00:05:42.528 "unmap": true, 00:05:42.528 "write": true, 00:05:42.528 "write_zeroes": true 00:05:42.528 }, 00:05:42.528 "uuid": "f27a8912-12f7-5066-a16d-fd886b4f6c05", 00:05:42.528 "zoned": false 00:05:42.528 } 00:05:42.528 ]' 00:05:42.528 09:58:12 -- rpc/rpc.sh@21 -- # jq length 00:05:42.528 09:58:12 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.528 09:58:12 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.528 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.528 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.528 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.528 09:58:12 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:05:42.528 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.528 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.528 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.528 09:58:12 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.529 09:58:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.529 09:58:12 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.529 09:58:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.529 09:58:12 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.529 09:58:12 -- rpc/rpc.sh@26 -- # jq length 00:05:42.529 09:58:12 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.529 00:05:42.529 real 0m0.321s 00:05:42.529 user 0m0.216s 00:05:42.529 sys 0m0.037s 00:05:42.529 09:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.529 09:58:12 -- common/autotest_common.sh@10 -- # set +x 00:05:42.529 ************************************ 00:05:42.529 END TEST rpc_daemon_integrity 00:05:42.529 ************************************ 00:05:42.529 09:58:13 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:42.529 09:58:13 -- rpc/rpc.sh@84 -- # killprocess 67512 00:05:42.529 09:58:13 -- common/autotest_common.sh@926 -- # '[' -z 67512 ']' 00:05:42.529 09:58:13 -- common/autotest_common.sh@930 -- # kill -0 67512 00:05:42.529 09:58:13 -- common/autotest_common.sh@931 -- # uname 00:05:42.529 09:58:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:42.529 09:58:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67512 00:05:42.529 09:58:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:42.529 09:58:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:42.529 killing process with pid 67512 00:05:42.529 09:58:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67512' 00:05:42.529 09:58:13 -- common/autotest_common.sh@945 -- # kill 67512 00:05:42.529 09:58:13 -- common/autotest_common.sh@950 -- # wait 67512 00:05:43.095 00:05:43.095 real 0m2.965s 00:05:43.095 user 0m3.906s 00:05:43.095 sys 0m0.712s 00:05:43.095 09:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.095 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.095 ************************************ 00:05:43.095 END TEST rpc 00:05:43.095 ************************************ 00:05:43.095 09:58:13 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.095 09:58:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.095 09:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.095 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.095 ************************************ 00:05:43.095 START TEST rpc_client 00:05:43.095 ************************************ 00:05:43.095 09:58:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.095 * Looking for test storage... 
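The passthru integrity test that just finished reduces to a short RPC sequence against the running target. A minimal standalone sketch in the same spirit, assuming a running spdk_tgt reachable on the default RPC socket (variable names here are illustrative, not part of the test script):

    # create an 8 MiB malloc bdev with 512-byte blocks; rpc.py prints its name
    malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
    # layer a passthru bdev on top; this claims the base bdev (claim_type exclusive_write)
    scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
    # both bdevs are now reported, which is the 'jq length' == 2 check above
    scripts/rpc.py bdev_get_bdevs | jq length
    # tear down in reverse order: the passthru first, then the base malloc
    scripts/rpc.py bdev_passthru_delete Passthru0
    scripts/rpc.py bdev_malloc_delete "$malloc"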
00:05:43.095 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:43.095 09:58:13 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:43.095 OK 00:05:43.095 09:58:13 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.095 00:05:43.095 real 0m0.105s 00:05:43.095 user 0m0.050s 00:05:43.095 sys 0m0.059s 00:05:43.095 09:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.095 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.095 ************************************ 00:05:43.095 END TEST rpc_client 00:05:43.095 ************************************ 00:05:43.095 09:58:13 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.095 09:58:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.095 09:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.095 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.095 ************************************ 00:05:43.095 START TEST json_config 00:05:43.095 ************************************ 00:05:43.095 09:58:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.095 09:58:13 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.095 09:58:13 -- nvmf/common.sh@7 -- # uname -s 00:05:43.095 09:58:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.095 09:58:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.095 09:58:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.095 09:58:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.096 09:58:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.096 09:58:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.096 09:58:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.096 09:58:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.096 09:58:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.096 09:58:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.354 09:58:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:05:43.354 09:58:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:05:43.354 09:58:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.354 09:58:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.354 09:58:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.354 09:58:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.354 09:58:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.354 09:58:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.354 09:58:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.354 09:58:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.354 09:58:13 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.354 09:58:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.354 09:58:13 -- paths/export.sh@5 -- # export PATH 00:05:43.354 09:58:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.354 09:58:13 -- nvmf/common.sh@46 -- # : 0 00:05:43.354 09:58:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.354 09:58:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.354 09:58:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.354 09:58:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.354 09:58:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.354 09:58:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.354 09:58:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.354 09:58:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.354 09:58:13 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.354 09:58:13 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:43.354 09:58:13 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:43.354 09:58:13 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:43.354 09:58:13 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:43.354 09:58:13 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:43.354 09:58:13 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:43.354 09:58:13 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:43.354 09:58:13 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:43.354 09:58:13 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:43.354 09:58:13 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.354 INFO: JSON configuration test init 
00:05:43.354 09:58:13 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:43.354 09:58:13 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:43.354 09:58:13 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:43.354 09:58:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.354 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 09:58:13 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:43.354 09:58:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.354 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 09:58:13 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:43.354 09:58:13 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.354 09:58:13 -- json_config/json_config.sh@99 -- # shift 00:05:43.354 09:58:13 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.354 09:58:13 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.354 09:58:13 -- json_config/json_config.sh@111 -- # app_pid[$app]=67812 00:05:43.354 Waiting for target to run... 00:05:43.354 09:58:13 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.354 09:58:13 -- json_config/json_config.sh@114 -- # waitforlisten 67812 /var/tmp/spdk_tgt.sock 00:05:43.354 09:58:13 -- common/autotest_common.sh@819 -- # '[' -z 67812 ']' 00:05:43.354 09:58:13 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:43.354 09:58:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.354 09:58:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.354 09:58:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.354 09:58:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.354 09:58:13 -- common/autotest_common.sh@10 -- # set +x 00:05:43.354 [2024-07-14 09:58:13.730907] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:43.354 [2024-07-14 09:58:13.731815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67812 ] 00:05:43.921 [2024-07-14 09:58:14.183643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.921 [2024-07-14 09:58:14.242899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:43.921 [2024-07-14 09:58:14.243069] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.179 09:58:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.179 09:58:14 -- common/autotest_common.sh@852 -- # return 0 00:05:44.179 00:05:44.179 09:58:14 -- json_config/json_config.sh@115 -- # echo '' 00:05:44.179 09:58:14 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:44.179 09:58:14 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:44.179 09:58:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.179 09:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.179 09:58:14 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:44.179 09:58:14 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:44.179 09:58:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:44.179 09:58:14 -- common/autotest_common.sh@10 -- # set +x 00:05:44.437 09:58:14 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:44.437 09:58:14 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:44.437 09:58:14 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:44.696 09:58:15 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:44.696 09:58:15 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:44.696 09:58:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.696 09:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.696 09:58:15 -- json_config/json_config.sh@48 -- # local ret=0 00:05:44.696 09:58:15 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:44.696 09:58:15 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:44.696 09:58:15 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:44.696 09:58:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:44.696 09:58:15 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:44.956 09:58:15 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:44.956 09:58:15 -- json_config/json_config.sh@51 -- # local get_types 00:05:44.956 09:58:15 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:44.956 09:58:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:44.956 09:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 09:58:15 -- json_config/json_config.sh@58 -- # return 0 00:05:44.956 09:58:15 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
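The notification-type handshake above can be reproduced by hand. A minimal sketch, assuming spdk_tgt is still listening on /var/tmp/spdk_tgt.sock as launched above:

    # list the notification types the target emits, one per line
    types=$(scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types | jq -r '.[]')
    # the test only proceeds when exactly the bdev register/unregister events are present
    [[ $types == $'bdev_register\nbdev_unregister' ]] && echo 'notification types OK'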
00:05:44.956 09:58:15 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:44.956 09:58:15 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:44.956 09:58:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.956 09:58:15 -- common/autotest_common.sh@10 -- # set +x 00:05:44.956 09:58:15 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:44.956 09:58:15 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:44.956 09:58:15 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:44.956 09:58:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.215 MallocForNvmf0 00:05:45.215 09:58:15 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.215 09:58:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.473 MallocForNvmf1 00:05:45.473 09:58:15 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.473 09:58:15 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:45.732 [2024-07-14 09:58:16.134961] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:45.732 09:58:16 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:45.732 09:58:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:45.991 09:58:16 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:45.991 09:58:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.250 09:58:16 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.250 09:58:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.508 09:58:16 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.508 09:58:16 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:46.767 [2024-07-14 09:58:17.067655] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:46.767 09:58:17 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:46.767 09:58:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.767 09:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.767 09:58:17 -- 
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:46.767 09:58:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:46.767 09:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:46.767 09:58:17 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:46.767 09:58:17 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:46.767 09:58:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.027 MallocBdevForConfigChangeCheck 00:05:47.027 09:58:17 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:47.027 09:58:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.027 09:58:17 -- common/autotest_common.sh@10 -- # set +x 00:05:47.027 09:58:17 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:47.027 09:58:17 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.594 INFO: shutting down applications... 00:05:47.594 09:58:17 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:47.594 09:58:17 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:47.594 09:58:17 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:47.594 09:58:17 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:47.594 09:58:17 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:47.594 Calling clear_iscsi_subsystem 00:05:47.594 Calling clear_nvmf_subsystem 00:05:47.594 Calling clear_nbd_subsystem 00:05:47.594 Calling clear_ublk_subsystem 00:05:47.594 Calling clear_vhost_blk_subsystem 00:05:47.594 Calling clear_vhost_scsi_subsystem 00:05:47.594 Calling clear_scheduler_subsystem 00:05:47.594 Calling clear_bdev_subsystem 00:05:47.594 Calling clear_accel_subsystem 00:05:47.594 Calling clear_vmd_subsystem 00:05:47.594 Calling clear_sock_subsystem 00:05:47.594 Calling clear_iobuf_subsystem 00:05:47.594 09:58:18 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:47.594 09:58:18 -- json_config/json_config.sh@396 -- # count=100 00:05:47.594 09:58:18 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:47.594 09:58:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.594 09:58:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:47.594 09:58:18 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:48.160 09:58:18 -- json_config/json_config.sh@398 -- # break 00:05:48.160 09:58:18 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:48.160 09:58:18 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:48.160 09:58:18 -- json_config/json_config.sh@120 -- # local app=target 00:05:48.160 09:58:18 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:48.161 09:58:18 -- json_config/json_config.sh@124 -- # [[ -n 67812 ]] 00:05:48.161 09:58:18 -- json_config/json_config.sh@127 -- # kill -SIGINT 67812 00:05:48.161 09:58:18 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
00:05:48.161 09:58:18 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.161 09:58:18 -- json_config/json_config.sh@130 -- # kill -0 67812 00:05:48.161 09:58:18 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:48.727 09:58:18 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:48.727 09:58:18 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.727 09:58:18 -- json_config/json_config.sh@130 -- # kill -0 67812 00:05:48.727 09:58:18 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:48.727 09:58:18 -- json_config/json_config.sh@132 -- # break 00:05:48.727 09:58:18 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:48.727 SPDK target shutdown done 00:05:48.727 09:58:18 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:48.727 INFO: relaunching applications... 00:05:48.727 09:58:18 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:48.727 09:58:18 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.727 09:58:18 -- json_config/json_config.sh@98 -- # local app=target 00:05:48.727 09:58:18 -- json_config/json_config.sh@99 -- # shift 00:05:48.727 09:58:18 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:48.727 09:58:18 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:48.727 09:58:18 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:48.727 09:58:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.727 09:58:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.727 09:58:18 -- json_config/json_config.sh@111 -- # app_pid[$app]=68087 00:05:48.727 09:58:18 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.727 Waiting for target to run... 00:05:48.727 09:58:18 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:48.727 09:58:18 -- json_config/json_config.sh@114 -- # waitforlisten 68087 /var/tmp/spdk_tgt.sock 00:05:48.727 09:58:18 -- common/autotest_common.sh@819 -- # '[' -z 68087 ']' 00:05:48.727 09:58:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.727 09:58:18 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.727 09:58:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.727 09:58:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.727 09:58:18 -- common/autotest_common.sh@10 -- # set +x 00:05:48.727 [2024-07-14 09:58:19.026654] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:48.727 [2024-07-14 09:58:19.026722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68087 ] 00:05:48.990 [2024-07-14 09:58:19.432740] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.990 [2024-07-14 09:58:19.485495] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:48.990 [2024-07-14 09:58:19.485652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.257 [2024-07-14 09:58:19.785872] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.515 [2024-07-14 09:58:19.817975] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.515 09:58:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.515 00:05:49.515 09:58:19 -- common/autotest_common.sh@852 -- # return 0 00:05:49.515 09:58:19 -- json_config/json_config.sh@115 -- # echo '' 00:05:49.515 09:58:19 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:49.515 INFO: Checking if target configuration is the same... 00:05:49.515 09:58:19 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:49.515 09:58:19 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.515 09:58:19 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:49.515 09:58:19 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.515 + '[' 2 -ne 2 ']' 00:05:49.515 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:49.515 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:49.515 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:49.515 +++ basename /dev/fd/62 00:05:49.515 ++ mktemp /tmp/62.XXX 00:05:49.515 + tmp_file_1=/tmp/62.yuG 00:05:49.515 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.515 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:49.515 + tmp_file_2=/tmp/spdk_tgt_config.json.alG 00:05:49.515 + ret=0 00:05:49.515 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.081 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.081 + diff -u /tmp/62.yuG /tmp/spdk_tgt_config.json.alG 00:05:50.081 INFO: JSON config files are the same 00:05:50.081 + echo 'INFO: JSON config files are the same' 00:05:50.081 + rm /tmp/62.yuG /tmp/spdk_tgt_config.json.alG 00:05:50.081 + exit 0 00:05:50.081 09:58:20 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:50.081 INFO: changing configuration and checking if this can be detected... 00:05:50.081 09:58:20 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
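The "JSON config files are the same" verdict comes from diffing two normalized dumps of the same configuration. A minimal sketch of that comparison, assuming the repo layout and socket used in this job:

    rootdir=/home/vagrant/spdk_repo/spdk
    # dump the live configuration and sort it into canonical form
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > /tmp/live.json
    # normalize the on-disk config the target was relaunched with
    "$rootdir/test/json_config/config_filter.py" -method sort \
        < "$rootdir/spdk_tgt_config.json" > /tmp/disk.json
    # an empty diff (exit 0) means the relaunched target reproduced the saved config
    diff -u /tmp/live.json /tmp/disk.json && echo 'INFO: JSON config files are the same'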
00:05:50.081 09:58:20 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.081 09:58:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.339 09:58:20 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.340 09:58:20 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:50.340 09:58:20 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.340 + '[' 2 -ne 2 ']' 00:05:50.340 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:50.340 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:50.340 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:50.340 +++ basename /dev/fd/62 00:05:50.340 ++ mktemp /tmp/62.XXX 00:05:50.340 + tmp_file_1=/tmp/62.jEY 00:05:50.340 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.340 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.340 + tmp_file_2=/tmp/spdk_tgt_config.json.s0H 00:05:50.340 + ret=0 00:05:50.340 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.598 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.598 + diff -u /tmp/62.jEY /tmp/spdk_tgt_config.json.s0H 00:05:50.598 + ret=1 00:05:50.598 + echo '=== Start of file: /tmp/62.jEY ===' 00:05:50.598 + cat /tmp/62.jEY 00:05:50.598 + echo '=== End of file: /tmp/62.jEY ===' 00:05:50.598 + echo '' 00:05:50.598 + echo '=== Start of file: /tmp/spdk_tgt_config.json.s0H ===' 00:05:50.598 + cat /tmp/spdk_tgt_config.json.s0H 00:05:50.598 + echo '=== End of file: /tmp/spdk_tgt_config.json.s0H ===' 00:05:50.598 + echo '' 00:05:50.598 + rm /tmp/62.jEY /tmp/spdk_tgt_config.json.s0H 00:05:50.598 + exit 1 00:05:50.598 INFO: configuration change detected. 00:05:50.598 09:58:21 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
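The change-detection pass is the same comparison after mutating the live target. Continuing the sketch above:

    # remove the sentinel bdev that exists only so a change can be observed
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock \
        bdev_malloc_delete MallocBdevForConfigChangeCheck
    # re-dump and re-sort the live config, which now lacks the sentinel
    "$rootdir/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock save_config \
        | "$rootdir/test/json_config/config_filter.py" -method sort > /tmp/live.json
    # diff now exits 1, which the test maps to 'configuration change detected'
    diff -u /tmp/live.json /tmp/disk.json || echo 'INFO: configuration change detected.'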
00:05:50.598 09:58:21 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:50.598 09:58:21 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:50.598 09:58:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.598 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.598 09:58:21 -- json_config/json_config.sh@360 -- # local ret=0 00:05:50.598 09:58:21 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:50.598 09:58:21 -- json_config/json_config.sh@370 -- # [[ -n 68087 ]] 00:05:50.598 09:58:21 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:50.598 09:58:21 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:50.598 09:58:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:50.598 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.598 09:58:21 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:50.598 09:58:21 -- json_config/json_config.sh@246 -- # uname -s 00:05:50.598 09:58:21 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:50.598 09:58:21 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:50.598 09:58:21 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:50.598 09:58:21 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:50.598 09:58:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:50.598 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:50.858 09:58:21 -- json_config/json_config.sh@376 -- # killprocess 68087 00:05:50.858 09:58:21 -- common/autotest_common.sh@926 -- # '[' -z 68087 ']' 00:05:50.858 09:58:21 -- common/autotest_common.sh@930 -- # kill -0 68087 00:05:50.858 09:58:21 -- common/autotest_common.sh@931 -- # uname 00:05:50.858 09:58:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:50.858 09:58:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68087 00:05:50.858 killing process with pid 68087 00:05:50.858 09:58:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:50.858 09:58:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:50.858 09:58:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68087' 00:05:50.858 09:58:21 -- common/autotest_common.sh@945 -- # kill 68087 00:05:50.858 09:58:21 -- common/autotest_common.sh@950 -- # wait 68087 00:05:51.117 09:58:21 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.117 09:58:21 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:51.117 09:58:21 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.117 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.117 INFO: Success 00:05:51.117 09:58:21 -- json_config/json_config.sh@381 -- # return 0 00:05:51.117 09:58:21 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:51.117 ************************************ 00:05:51.117 END TEST json_config 00:05:51.117 ************************************ 00:05:51.117 00:05:51.117 real 0m7.871s 00:05:51.117 user 0m10.990s 00:05:51.117 sys 0m1.860s 00:05:51.117 09:58:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.117 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.117 09:58:21 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.117 
09:58:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.117 09:58:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.117 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.117 ************************************ 00:05:51.117 START TEST json_config_extra_key 00:05:51.117 ************************************ 00:05:51.117 09:58:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.117 09:58:21 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.117 09:58:21 -- nvmf/common.sh@7 -- # uname -s 00:05:51.117 09:58:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.118 09:58:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.118 09:58:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.118 09:58:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.118 09:58:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.118 09:58:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.118 09:58:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.118 09:58:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.118 09:58:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.118 09:58:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.118 09:58:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:05:51.118 09:58:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:05:51.118 09:58:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.118 09:58:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.118 09:58:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.118 09:58:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.118 09:58:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.118 09:58:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.118 09:58:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.118 09:58:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.118 09:58:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.118 09:58:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:51.118 09:58:21 -- paths/export.sh@5 -- # export PATH 00:05:51.118 09:58:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.118 09:58:21 -- nvmf/common.sh@46 -- # : 0 00:05:51.118 09:58:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:51.118 09:58:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:51.118 09:58:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:51.118 09:58:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.118 09:58:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.118 09:58:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:51.118 09:58:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:51.118 09:58:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:51.118 INFO: launching applications... 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=68251 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.118 Waiting for target to run... 00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
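Unlike the --wait-for-rpc runs earlier, the extra-key test boots the target directly from a JSON file. A minimal launch sketch using the binary and config paths from this job (the readiness poll via rpc_get_methods is an assumption, not lifted from this log):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    app_pid=$!
    # poll until the UNIX-domain RPC socket answers; any cheap RPC will do
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods > /dev/null 2>&1; do
        sleep 0.5
    done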
00:05:51.118 09:58:21 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 68251 /var/tmp/spdk_tgt.sock 00:05:51.118 09:58:21 -- common/autotest_common.sh@819 -- # '[' -z 68251 ']' 00:05:51.118 09:58:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.118 09:58:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.118 09:58:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.118 09:58:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.118 09:58:21 -- common/autotest_common.sh@10 -- # set +x 00:05:51.118 [2024-07-14 09:58:21.639950] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:51.118 [2024-07-14 09:58:21.640786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68251 ] 00:05:51.685 [2024-07-14 09:58:22.155628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.685 [2024-07-14 09:58:22.220329] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.685 [2024-07-14 09:58:22.220499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.251 00:05:52.251 INFO: shutting down applications... 00:05:52.251 09:58:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.251 09:58:22 -- common/autotest_common.sh@852 -- # return 0 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
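The shutdown that follows is a SIGINT plus a bounded liveness poll, mirroring the (( i < 30 )) loop traced below. A standalone sketch, reusing app_pid from the launch sketch above:

    kill -SIGINT "$app_pid"
    for (( i = 0; i < 30; i++ )); do
        # kill -0 sends no signal; it merely tests whether the process still exists
        if ! kill -0 "$app_pid" 2> /dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done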
00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 68251 ]] 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 68251 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68251 00:05:52.251 09:58:22 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@50 -- # kill -0 68251 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:52.827 SPDK target shutdown done 00:05:52.827 09:58:23 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:52.827 Success 00:05:52.827 00:05:52.827 real 0m1.638s 00:05:52.827 user 0m1.403s 00:05:52.827 sys 0m0.541s 00:05:52.827 ************************************ 00:05:52.827 END TEST json_config_extra_key 00:05:52.827 ************************************ 00:05:52.827 09:58:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.827 09:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.827 09:58:23 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.827 09:58:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.827 09:58:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.827 09:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.827 ************************************ 00:05:52.827 START TEST alias_rpc 00:05:52.827 ************************************ 00:05:52.827 09:58:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.827 * Looking for test storage... 00:05:52.827 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.827 09:58:23 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.827 09:58:23 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=68332 00:05:52.828 09:58:23 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 68332 00:05:52.828 09:58:23 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.828 09:58:23 -- common/autotest_common.sh@819 -- # '[' -z 68332 ']' 00:05:52.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.828 09:58:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.828 09:58:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.828 09:58:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:52.828 09:58:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.828 09:58:23 -- common/autotest_common.sh@10 -- # set +x 00:05:52.828 [2024-07-14 09:58:23.355721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:52.828 [2024-07-14 09:58:23.356077] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68332 ] 00:05:53.087 [2024-07-14 09:58:23.495803] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.087 [2024-07-14 09:58:23.564309] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.087 [2024-07-14 09:58:23.564466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.021 09:58:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:54.021 09:58:24 -- common/autotest_common.sh@852 -- # return 0 00:05:54.021 09:58:24 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:54.279 09:58:24 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 68332 00:05:54.279 09:58:24 -- common/autotest_common.sh@926 -- # '[' -z 68332 ']' 00:05:54.279 09:58:24 -- common/autotest_common.sh@930 -- # kill -0 68332 00:05:54.279 09:58:24 -- common/autotest_common.sh@931 -- # uname 00:05:54.279 09:58:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.279 09:58:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68332 00:05:54.279 killing process with pid 68332 00:05:54.279 09:58:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.279 09:58:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.279 09:58:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68332' 00:05:54.279 09:58:24 -- common/autotest_common.sh@945 -- # kill 68332 00:05:54.279 09:58:24 -- common/autotest_common.sh@950 -- # wait 68332 00:05:54.537 ************************************ 00:05:54.537 END TEST alias_rpc 00:05:54.537 ************************************ 00:05:54.537 00:05:54.537 real 0m1.772s 00:05:54.537 user 0m2.016s 00:05:54.537 sys 0m0.437s 00:05:54.537 09:58:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.537 09:58:24 -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 09:58:25 -- spdk/autotest.sh@182 -- # [[ 1 -eq 0 ]] 00:05:54.537 09:58:25 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.537 09:58:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.537 09:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.537 09:58:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.537 ************************************ 00:05:54.537 START TEST dpdk_mem_utility 00:05:54.537 ************************************ 00:05:54.537 09:58:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:54.795 * Looking for test storage... 00:05:54.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:54.795 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:54.795 09:58:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:54.795 09:58:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=68423 00:05:54.795 09:58:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 68423 00:05:54.795 09:58:25 -- common/autotest_common.sh@819 -- # '[' -z 68423 ']' 00:05:54.795 09:58:25 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:54.795 09:58:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.795 09:58:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.796 09:58:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.796 09:58:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.796 09:58:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.796 [2024-07-14 09:58:25.166286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:54.796 [2024-07-14 09:58:25.166418] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68423 ] 00:05:54.796 [2024-07-14 09:58:25.304510] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.053 [2024-07-14 09:58:25.372339] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.053 [2024-07-14 09:58:25.372494] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.620 09:58:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.620 09:58:26 -- common/autotest_common.sh@852 -- # return 0 00:05:55.620 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:55.620 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:55.620 09:58:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:55.620 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:55.620 { 00:05:55.620 "filename": "/tmp/spdk_mem_dump.txt" 00:05:55.620 } 00:05:55.620 09:58:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:55.620 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:55.879 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:55.879 1 heaps totaling size 814.000000 MiB 00:05:55.879 size: 814.000000 MiB heap id: 0 00:05:55.879 end heaps---------- 00:05:55.879 8 mempools totaling size 598.116089 MiB 00:05:55.879 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:55.879 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:55.879 size: 84.521057 MiB name: bdev_io_68423 00:05:55.879 size: 51.011292 MiB name: evtpool_68423 00:05:55.879 size: 50.003479 MiB name: msgpool_68423 00:05:55.879 size: 21.763794 MiB name: PDU_Pool 00:05:55.879 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:55.879 size: 0.026123 MiB name: Session_Pool 00:05:55.879 end mempools------- 00:05:55.879 6 memzones totaling size 4.142822 MiB 00:05:55.879 size: 1.000366 MiB name: RG_ring_0_68423 00:05:55.879 size: 1.000366 MiB name: RG_ring_1_68423 00:05:55.879 size: 1.000366 MiB name: 
RG_ring_4_68423 00:05:55.880 size: 1.000366 MiB name: RG_ring_5_68423 00:05:55.880 size: 0.125366 MiB name: RG_ring_2_68423 00:05:55.880 size: 0.015991 MiB name: RG_ring_3_68423 00:05:55.880 end memzones------- 00:05:55.880 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:55.880 heap id: 0 total size: 814.000000 MiB number of busy elements: 214 number of free elements: 15 00:05:55.880 list of free elements. size: 12.487671 MiB 00:05:55.880 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:55.880 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:55.880 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:55.880 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:55.880 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:55.880 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:55.880 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:55.880 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:55.880 element at address: 0x200000200000 with size: 0.837219 MiB 00:05:55.880 element at address: 0x20001aa00000 with size: 0.572632 MiB 00:05:55.880 element at address: 0x20000b200000 with size: 0.489990 MiB 00:05:55.880 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:55.880 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:55.880 element at address: 0x200027e00000 with size: 0.398315 MiB 00:05:55.880 element at address: 0x200003a00000 with size: 0.351685 MiB 00:05:55.880 list of standard malloc elements. size: 199.249756 MiB 00:05:55.880 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:55.880 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:55.880 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:55.880 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:55.880 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:55.880 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:55.880 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:55.880 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:55.880 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:55.880 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:55.880 element 
at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000070fdd80 
with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:55.880 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94540 with size: 0.000183 MiB 
00:05:55.881 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:55.881 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e66040 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6cc40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:55.881 element at 
address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:55.881 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:55.881 list of memzone associated elements. 
size: 602.262573 MiB 00:05:55.881 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:55.881 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:55.881 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:55.881 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:55.881 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:55.881 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_68423_0 00:05:55.881 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:55.881 associated memzone info: size: 48.002930 MiB name: MP_evtpool_68423_0 00:05:55.881 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:55.881 associated memzone info: size: 48.002930 MiB name: MP_msgpool_68423_0 00:05:55.881 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:55.881 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:55.881 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:55.881 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:55.881 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:55.881 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_68423 00:05:55.881 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:55.881 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_68423 00:05:55.881 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:55.881 associated memzone info: size: 1.007996 MiB name: MP_evtpool_68423 00:05:55.881 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:55.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:55.881 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:55.881 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:55.882 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:55.882 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:55.882 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:55.882 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:55.882 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:55.882 associated memzone info: size: 1.000366 MiB name: RG_ring_0_68423 00:05:55.882 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:55.882 associated memzone info: size: 1.000366 MiB name: RG_ring_1_68423 00:05:55.882 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:55.882 associated memzone info: size: 1.000366 MiB name: RG_ring_4_68423 00:05:55.882 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:55.882 associated memzone info: size: 1.000366 MiB name: RG_ring_5_68423 00:05:55.882 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:55.882 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_68423 00:05:55.882 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:55.882 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:55.882 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:55.882 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:55.882 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:55.882 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:55.882 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:55.882 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_68423 00:05:55.882 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:55.882 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:55.882 element at address: 0x200027e66100 with size: 0.023743 MiB 00:05:55.882 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:55.882 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:55.882 associated memzone info: size: 0.015991 MiB name: RG_ring_3_68423 00:05:55.882 element at address: 0x200027e6c240 with size: 0.002441 MiB 00:05:55.882 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:55.882 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:05:55.882 associated memzone info: size: 0.000183 MiB name: MP_msgpool_68423 00:05:55.882 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:55.882 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_68423 00:05:55.882 element at address: 0x200027e6cd00 with size: 0.000305 MiB 00:05:55.882 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:55.882 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:55.882 09:58:26 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 68423 00:05:55.882 09:58:26 -- common/autotest_common.sh@926 -- # '[' -z 68423 ']' 00:05:55.882 09:58:26 -- common/autotest_common.sh@930 -- # kill -0 68423 00:05:55.882 09:58:26 -- common/autotest_common.sh@931 -- # uname 00:05:55.882 09:58:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.882 09:58:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68423 00:05:55.882 09:58:26 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.882 09:58:26 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.882 09:58:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68423' 00:05:55.882 killing process with pid 68423 00:05:55.882 09:58:26 -- common/autotest_common.sh@945 -- # kill 68423 00:05:55.882 09:58:26 -- common/autotest_common.sh@950 -- # wait 68423 00:05:56.140 00:05:56.140 real 0m1.638s 00:05:56.140 user 0m1.761s 00:05:56.140 sys 0m0.441s 00:05:56.140 09:58:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.140 ************************************ 00:05:56.140 END TEST dpdk_mem_utility 00:05:56.140 ************************************ 00:05:56.140 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.399 09:58:26 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.399 09:58:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.399 09:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.399 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.399 ************************************ 00:05:56.399 START TEST event 00:05:56.399 ************************************ 00:05:56.399 09:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:56.399 * Looking for test storage... 
00:05:56.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:56.399 09:58:26 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:56.399 09:58:26 -- bdev/nbd_common.sh@6 -- # set -e 00:05:56.399 09:58:26 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.399 09:58:26 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:56.399 09:58:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.399 09:58:26 -- common/autotest_common.sh@10 -- # set +x 00:05:56.399 ************************************ 00:05:56.399 START TEST event_perf 00:05:56.399 ************************************ 00:05:56.399 09:58:26 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:56.399 Running I/O for 1 seconds...[2024-07-14 09:58:26.823544] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:05:56.399 [2024-07-14 09:58:26.823989] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68506 ] 00:05:56.658 [2024-07-14 09:58:26.952770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:56.658 [2024-07-14 09:58:27.042435] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.658 [2024-07-14 09:58:27.042562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.658 [2024-07-14 09:58:27.042728] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.658 Running I/O for 1 seconds...[2024-07-14 09:58:27.042730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.593 00:05:57.593 lcore 0: 112640 00:05:57.593 lcore 1: 112638 00:05:57.593 lcore 2: 112641 00:05:57.593 lcore 3: 112638 00:05:57.593 done. 00:05:57.593 00:05:57.593 real 0m1.296s 00:05:57.593 user 0m4.110s 00:05:57.593 sys 0m0.060s 00:05:57.593 09:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.593 09:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.593 ************************************ 00:05:57.593 END TEST event_perf 00:05:57.593 ************************************ 00:05:57.852 09:58:28 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:57.852 09:58:28 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:57.852 09:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.852 09:58:28 -- common/autotest_common.sh@10 -- # set +x 00:05:57.852 ************************************ 00:05:57.852 START TEST event_reactor 00:05:57.852 ************************************ 00:05:57.852 09:58:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:57.852 [2024-07-14 09:58:28.182339] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:57.852 [2024-07-14 09:58:28.182468] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68550 ] 00:05:57.852 [2024-07-14 09:58:28.330305] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.852 [2024-07-14 09:58:28.391118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.228 test_start 00:05:59.228 oneshot 00:05:59.228 tick 100 00:05:59.228 tick 100 00:05:59.228 tick 250 00:05:59.228 tick 100 00:05:59.228 tick 100 00:05:59.228 tick 100 00:05:59.228 tick 250 00:05:59.228 tick 500 00:05:59.228 tick 100 00:05:59.228 tick 100 00:05:59.228 tick 250 00:05:59.228 tick 100 00:05:59.228 tick 100 00:05:59.228 test_end 00:05:59.228 00:05:59.228 real 0m1.282s 00:05:59.228 user 0m1.122s 00:05:59.228 sys 0m0.055s 00:05:59.228 09:58:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.228 ************************************ 00:05:59.228 09:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.228 END TEST event_reactor 00:05:59.228 ************************************ 00:05:59.228 09:58:29 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.228 09:58:29 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:59.228 09:58:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.228 09:58:29 -- common/autotest_common.sh@10 -- # set +x 00:05:59.228 ************************************ 00:05:59.228 START TEST event_reactor_perf 00:05:59.228 ************************************ 00:05:59.228 09:58:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:59.228 [2024-07-14 09:58:29.512973] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:05:59.228 [2024-07-14 09:58:29.513056] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68580 ] 00:05:59.228 [2024-07-14 09:58:29.636125] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.228 [2024-07-14 09:58:29.686680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.606 test_start 00:06:00.606 test_end 00:06:00.606 Performance: 449084 events per second 00:06:00.606 00:06:00.606 real 0m1.247s 00:06:00.606 user 0m1.092s 00:06:00.606 sys 0m0.049s 00:06:00.606 09:58:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.606 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.606 ************************************ 00:06:00.606 END TEST event_reactor_perf 00:06:00.606 ************************************ 00:06:00.606 09:58:30 -- event/event.sh@49 -- # uname -s 00:06:00.606 09:58:30 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:00.606 09:58:30 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.606 09:58:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:00.606 09:58:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.606 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.606 ************************************ 00:06:00.606 START TEST event_scheduler 00:06:00.606 ************************************ 00:06:00.606 09:58:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:00.606 * Looking for test storage... 00:06:00.606 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:00.606 09:58:30 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:00.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.606 09:58:30 -- scheduler/scheduler.sh@35 -- # scheduler_pid=68635 00:06:00.606 09:58:30 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.606 09:58:30 -- scheduler/scheduler.sh@37 -- # waitforlisten 68635 00:06:00.606 09:58:30 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:00.606 09:58:30 -- common/autotest_common.sh@819 -- # '[' -z 68635 ']' 00:06:00.606 09:58:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.606 09:58:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:00.606 09:58:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.606 09:58:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:00.606 09:58:30 -- common/autotest_common.sh@10 -- # set +x 00:06:00.606 [2024-07-14 09:58:30.935731] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:00.606 [2024-07-14 09:58:30.935830] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68635 ] 00:06:00.606 [2024-07-14 09:58:31.076990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:00.865 [2024-07-14 09:58:31.162843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.865 [2024-07-14 09:58:31.162961] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.865 [2024-07-14 09:58:31.163100] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:00.865 [2024-07-14 09:58:31.163114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.432 09:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.432 09:58:31 -- common/autotest_common.sh@852 -- # return 0 00:06:01.432 09:58:31 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:01.432 09:58:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.432 09:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.432 POWER: Env isn't set yet! 00:06:01.432 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:01.432 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:01.432 POWER: Cannot set governor of lcore 0 to userspace 00:06:01.432 POWER: Attempting to initialise PSTAT power management... 00:06:01.432 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:01.432 POWER: Cannot set governor of lcore 0 to performance 00:06:01.432 POWER: Attempting to initialise AMD PSTATE power management... 00:06:01.432 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:01.432 POWER: Cannot set governor of lcore 0 to userspace 00:06:01.432 POWER: Attempting to initialise CPPC power management... 00:06:01.432 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:01.432 POWER: Cannot set governor of lcore 0 to userspace 00:06:01.432 POWER: Attempting to initialise VM power management... 
00:06:01.432 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:01.432 POWER: Unable to set Power Management Environment for lcore 0 00:06:01.432 [2024-07-14 09:58:31.925929] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:01.432 [2024-07-14 09:58:31.925944] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:01.432 [2024-07-14 09:58:31.925953] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:01.432 [2024-07-14 09:58:31.925967] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:01.432 [2024-07-14 09:58:31.925976] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:01.432 [2024-07-14 09:58:31.925984] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:01.432 09:58:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.432 09:58:31 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:01.432 09:58:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.432 09:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 [2024-07-14 09:58:32.015402] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:01.692 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:01.692 09:58:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.692 09:58:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.692 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 ************************************ 00:06:01.692 START TEST scheduler_create_thread 00:06:01.692 ************************************ 00:06:01.692 09:58:32 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:01.692 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.692 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 2 00:06:01.692 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:01.692 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.692 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 3 00:06:01.692 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:01.692 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.692 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 4 00:06:01.692 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:01.692 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.692 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.692 5 00:06:01.692 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.692 09:58:32 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 6 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 7 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 8 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 9 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 10 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.693 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:01.693 09:58:32 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:01.693 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.693 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:02.259 09:58:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.259 09:58:32 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.259 09:58:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.259 09:58:32 -- common/autotest_common.sh@10 -- # set +x 00:06:03.631 09:58:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.631 09:58:34 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.631 09:58:34 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.631 09:58:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.631 09:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:05.006 ************************************ 00:06:05.006 END TEST scheduler_create_thread 00:06:05.006 ************************************ 00:06:05.006 09:58:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:05.006 00:06:05.006 real 0m3.095s 00:06:05.006 user 0m0.019s 00:06:05.006 sys 0m0.003s 00:06:05.006 09:58:35 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.006 09:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.006 09:58:35 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:05.006 09:58:35 -- scheduler/scheduler.sh@46 -- # killprocess 68635 00:06:05.006 09:58:35 -- common/autotest_common.sh@926 -- # '[' -z 68635 ']' 00:06:05.006 09:58:35 -- common/autotest_common.sh@930 -- # kill -0 68635 00:06:05.006 09:58:35 -- common/autotest_common.sh@931 -- # uname 00:06:05.006 09:58:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:05.006 09:58:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68635 00:06:05.006 killing process with pid 68635 00:06:05.006 09:58:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:05.006 09:58:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:05.006 09:58:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68635' 00:06:05.006 09:58:35 -- common/autotest_common.sh@945 -- # kill 68635 00:06:05.006 09:58:35 -- common/autotest_common.sh@950 -- # wait 68635 00:06:05.006 [2024-07-14 09:58:35.500028] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:05.264 ************************************ 00:06:05.264 END TEST event_scheduler 00:06:05.264 ************************************ 00:06:05.264 00:06:05.264 real 0m4.923s 00:06:05.264 user 0m9.697s 00:06:05.264 sys 0m0.392s 00:06:05.264 09:58:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.264 09:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.264 09:58:35 -- event/event.sh@51 -- # modprobe -n nbd 00:06:05.264 09:58:35 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:05.264 09:58:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.264 09:58:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.264 09:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.264 ************************************ 00:06:05.264 START TEST app_repeat 00:06:05.264 ************************************ 00:06:05.264 09:58:35 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:05.264 09:58:35 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.264 09:58:35 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.264 09:58:35 -- event/event.sh@13 -- # local nbd_list 00:06:05.264 09:58:35 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.264 09:58:35 -- event/event.sh@14 -- # local bdev_list 00:06:05.264 09:58:35 -- event/event.sh@15 -- # local repeat_times=4 00:06:05.264 09:58:35 -- event/event.sh@17 -- # modprobe nbd 00:06:05.264 Process app_repeat pid: 68758 00:06:05.264 spdk_app_start Round 0 00:06:05.264 09:58:35 -- event/event.sh@19 -- # repeat_pid=68758 00:06:05.264 09:58:35 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.264 09:58:35 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:05.264 09:58:35 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 68758' 00:06:05.264 09:58:35 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.264 09:58:35 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:05.264 09:58:35 -- event/event.sh@25 -- # waitforlisten 68758 /var/tmp/spdk-nbd.sock 00:06:05.264 09:58:35 -- common/autotest_common.sh@819 -- # '[' -z 68758 ']' 00:06:05.264 09:58:35 -- common/autotest_common.sh@823 -- # 
local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.264 09:58:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.264 09:58:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.264 09:58:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.264 09:58:35 -- common/autotest_common.sh@10 -- # set +x 00:06:05.523 [2024-07-14 09:58:35.815161] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:05.523 [2024-07-14 09:58:35.815300] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68758 ] 00:06:05.523 [2024-07-14 09:58:35.953866] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.523 [2024-07-14 09:58:36.046777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.523 [2024-07-14 09:58:36.046784] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.459 09:58:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:06.459 09:58:36 -- common/autotest_common.sh@852 -- # return 0 00:06:06.459 09:58:36 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.718 Malloc0 00:06:06.718 09:58:37 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.978 Malloc1 00:06:06.978 09:58:37 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.978 09:58:37 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.978 09:58:37 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.979 09:58:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:07.239 /dev/nbd0 00:06:07.239 09:58:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:07.239 09:58:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:07.239 09:58:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:07.239 09:58:37 -- common/autotest_common.sh@857 -- # local i 00:06:07.239 09:58:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:07.239 09:58:37 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:07.239 09:58:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:07.239 09:58:37 -- common/autotest_common.sh@861 -- # break 00:06:07.239 09:58:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:07.239 09:58:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:07.239 09:58:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.239 1+0 records in 00:06:07.239 1+0 records out 00:06:07.239 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000266347 s, 15.4 MB/s 00:06:07.239 09:58:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.239 09:58:37 -- common/autotest_common.sh@874 -- # size=4096 00:06:07.239 09:58:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.239 09:58:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:07.239 09:58:37 -- common/autotest_common.sh@877 -- # return 0 00:06:07.239 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.239 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.239 09:58:37 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:07.497 /dev/nbd1 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:07.497 09:58:37 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:07.497 09:58:37 -- common/autotest_common.sh@857 -- # local i 00:06:07.497 09:58:37 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:07.497 09:58:37 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:07.497 09:58:37 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:07.497 09:58:37 -- common/autotest_common.sh@861 -- # break 00:06:07.497 09:58:37 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:07.497 09:58:37 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:07.497 09:58:37 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:07.497 1+0 records in 00:06:07.497 1+0 records out 00:06:07.497 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000273142 s, 15.0 MB/s 00:06:07.497 09:58:37 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.497 09:58:37 -- common/autotest_common.sh@874 -- # size=4096 00:06:07.497 09:58:37 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:07.497 09:58:37 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:07.497 09:58:37 -- common/autotest_common.sh@877 -- # return 0 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.497 09:58:37 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.756 { 00:06:07.756 "bdev_name": "Malloc0", 00:06:07.756 "nbd_device": "/dev/nbd0" 00:06:07.756 }, 00:06:07.756 { 00:06:07.756 "bdev_name": "Malloc1", 
00:06:07.756 "nbd_device": "/dev/nbd1" 00:06:07.756 } 00:06:07.756 ]' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.756 { 00:06:07.756 "bdev_name": "Malloc0", 00:06:07.756 "nbd_device": "/dev/nbd0" 00:06:07.756 }, 00:06:07.756 { 00:06:07.756 "bdev_name": "Malloc1", 00:06:07.756 "nbd_device": "/dev/nbd1" 00:06:07.756 } 00:06:07.756 ]' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.756 /dev/nbd1' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.756 /dev/nbd1' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.756 256+0 records in 00:06:07.756 256+0 records out 00:06:07.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00789442 s, 133 MB/s 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.756 256+0 records in 00:06:07.756 256+0 records out 00:06:07.756 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261139 s, 40.2 MB/s 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.756 09:58:38 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:08.015 256+0 records in 00:06:08.015 256+0 records out 00:06:08.015 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0307137 s, 34.1 MB/s 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.015 09:58:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@51 -- # local i 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:08.016 09:58:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@41 -- # break 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:08.275 09:58:38 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@41 -- # break 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@45 -- # return 0 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:08.534 09:58:38 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.534 09:58:39 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.534 09:58:39 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.534 09:58:39 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@65 -- # true 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.792 09:58:39 -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.792 09:58:39 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:09.076 09:58:39 -- event/event.sh@35 -- # sleep 3 00:06:09.076 [2024-07-14 09:58:39.571764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.335 [2024-07-14 09:58:39.658621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.335 [2024-07-14 
09:58:39.658633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.335 [2024-07-14 09:58:39.714175] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:09.335 [2024-07-14 09:58:39.714284] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.904 09:58:42 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.904 spdk_app_start Round 1 00:06:11.904 09:58:42 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.904 09:58:42 -- event/event.sh@25 -- # waitforlisten 68758 /var/tmp/spdk-nbd.sock 00:06:11.904 09:58:42 -- common/autotest_common.sh@819 -- # '[' -z 68758 ']' 00:06:11.904 09:58:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.904 09:58:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:11.904 09:58:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.904 09:58:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.904 09:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:12.163 09:58:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:12.163 09:58:42 -- common/autotest_common.sh@852 -- # return 0 00:06:12.163 09:58:42 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.422 Malloc0 00:06:12.422 09:58:42 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.680 Malloc1 00:06:12.938 09:58:43 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@12 -- # local i 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.938 /dev/nbd0 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.938 09:58:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.938 09:58:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:12.939 09:58:43 -- common/autotest_common.sh@857 -- # local i 00:06:12.939 09:58:43 -- common/autotest_common.sh@859 -- # (( i = 
1 )) 00:06:12.939 09:58:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.939 09:58:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:12.939 09:58:43 -- common/autotest_common.sh@861 -- # break 00:06:12.939 09:58:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.939 09:58:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.939 09:58:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.939 1+0 records in 00:06:12.939 1+0 records out 00:06:12.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000314176 s, 13.0 MB/s 00:06:12.939 09:58:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.939 09:58:43 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.939 09:58:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.939 09:58:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.939 09:58:43 -- common/autotest_common.sh@877 -- # return 0 00:06:12.939 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.939 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.939 09:58:43 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:13.196 /dev/nbd1 00:06:13.197 09:58:43 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:13.197 09:58:43 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:13.197 09:58:43 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:13.197 09:58:43 -- common/autotest_common.sh@857 -- # local i 00:06:13.197 09:58:43 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:13.197 09:58:43 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:13.197 09:58:43 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:13.197 09:58:43 -- common/autotest_common.sh@861 -- # break 00:06:13.197 09:58:43 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:13.197 09:58:43 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:13.197 09:58:43 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:13.197 1+0 records in 00:06:13.197 1+0 records out 00:06:13.197 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000439502 s, 9.3 MB/s 00:06:13.197 09:58:43 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.197 09:58:43 -- common/autotest_common.sh@874 -- # size=4096 00:06:13.197 09:58:43 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:13.454 09:58:43 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:13.454 09:58:43 -- common/autotest_common.sh@877 -- # return 0 00:06:13.454 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:13.454 09:58:43 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:13.454 09:58:43 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.454 09:58:43 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.454 09:58:43 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:13.711 { 00:06:13.711 "bdev_name": "Malloc0", 00:06:13.711 "nbd_device": "/dev/nbd0" 00:06:13.711 }, 00:06:13.711 { 00:06:13.711 
"bdev_name": "Malloc1", 00:06:13.711 "nbd_device": "/dev/nbd1" 00:06:13.711 } 00:06:13.711 ]' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:13.711 { 00:06:13.711 "bdev_name": "Malloc0", 00:06:13.711 "nbd_device": "/dev/nbd0" 00:06:13.711 }, 00:06:13.711 { 00:06:13.711 "bdev_name": "Malloc1", 00:06:13.711 "nbd_device": "/dev/nbd1" 00:06:13.711 } 00:06:13.711 ]' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:13.711 /dev/nbd1' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:13.711 /dev/nbd1' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@65 -- # count=2 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@95 -- # count=2 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:13.711 256+0 records in 00:06:13.711 256+0 records out 00:06:13.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00829303 s, 126 MB/s 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:13.711 256+0 records in 00:06:13.711 256+0 records out 00:06:13.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0260747 s, 40.2 MB/s 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:13.711 256+0 records in 00:06:13.711 256+0 records out 00:06:13.711 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293924 s, 35.7 MB/s 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:13.711 09:58:44 -- 
bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@51 -- # local i 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.711 09:58:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@41 -- # break 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.969 09:58:44 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@41 -- # break 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@45 -- # return 0 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.228 09:58:44 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:14.486 09:58:44 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:14.486 09:58:45 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:14.486 09:58:45 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@65 -- # true 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@65 -- # count=0 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@104 -- # count=0 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:14.744 09:58:45 -- bdev/nbd_common.sh@109 -- # return 0 00:06:14.744 09:58:45 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:15.002 09:58:45 -- event/event.sh@35 -- # sleep 3 00:06:15.002 [2024-07-14 09:58:45.538919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.260 [2024-07-14 09:58:45.608621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
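Between rounds the app is killed over RPC and restarts, which is what the reactor notices above mark. Each round's attach and detach rest on two polling helpers traced repeatedly in this transcript; a simplified standalone sketch of both (retry bound of 20 taken from the trace; the sleep pacing and the temp-file path are assumptions, and the real helpers live in common/autotest_common.sh and bdev/nbd_common.sh):

# Wait until an nbd node shows up in /proc/partitions, then prove it
# answers I/O with one direct 4 KiB read (mirrors waitfornbd).
waitfornbd() {
    local nbd_name=$1 i size
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions && break
        sleep 0.1   # assumed pacing between retries
    done
    dd if="/dev/$nbd_name" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    size=$(stat -c %s /tmp/nbdtest)
    rm -f /tmp/nbdtest
    [ "$size" != 0 ]   # the read must have produced real data
}

# Wait until the node is gone again after nbd_stop_disk (mirrors waitfornbd_exit).
waitfornbd_exit() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" /proc/partitions || break
        sleep 0.1   # assumed pacing
    done
    ! grep -q -w "$nbd_name" /proc/partitions
}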
00:06:15.260 [2024-07-14 09:58:45.608633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.260 [2024-07-14 09:58:45.663764] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.260 [2024-07-14 09:58:45.663850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:18.540 09:58:48 -- event/event.sh@23 -- # for i in {0..2} 00:06:18.540 spdk_app_start Round 2 00:06:18.540 09:58:48 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:18.540 09:58:48 -- event/event.sh@25 -- # waitforlisten 68758 /var/tmp/spdk-nbd.sock 00:06:18.540 09:58:48 -- common/autotest_common.sh@819 -- # '[' -z 68758 ']' 00:06:18.540 09:58:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.540 09:58:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:18.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.540 09:58:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.540 09:58:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:18.540 09:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:18.540 09:58:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:18.540 09:58:48 -- common/autotest_common.sh@852 -- # return 0 00:06:18.540 09:58:48 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.540 Malloc0 00:06:18.540 09:58:48 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.798 Malloc1 00:06:18.798 09:58:49 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@12 -- # local i 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.798 09:58:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.057 /dev/nbd0 00:06:19.057 09:58:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.057 09:58:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.057 09:58:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:19.057 09:58:49 -- common/autotest_common.sh@857 -- # local i 00:06:19.057 09:58:49 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.057 09:58:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.057 09:58:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:19.057 09:58:49 -- common/autotest_common.sh@861 -- # break 00:06:19.057 09:58:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.057 09:58:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.057 09:58:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.057 1+0 records in 00:06:19.057 1+0 records out 00:06:19.057 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364978 s, 11.2 MB/s 00:06:19.057 09:58:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.057 09:58:49 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.057 09:58:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.057 09:58:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.057 09:58:49 -- common/autotest_common.sh@877 -- # return 0 00:06:19.057 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.057 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.057 09:58:49 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.316 /dev/nbd1 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.316 09:58:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:19.316 09:58:49 -- common/autotest_common.sh@857 -- # local i 00:06:19.316 09:58:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:19.316 09:58:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:19.316 09:58:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:19.316 09:58:49 -- common/autotest_common.sh@861 -- # break 00:06:19.316 09:58:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:19.316 09:58:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:19.316 09:58:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.316 1+0 records in 00:06:19.316 1+0 records out 00:06:19.316 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318499 s, 12.9 MB/s 00:06:19.316 09:58:49 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.316 09:58:49 -- common/autotest_common.sh@874 -- # size=4096 00:06:19.316 09:58:49 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:19.316 09:58:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:19.316 09:58:49 -- common/autotest_common.sh@877 -- # return 0 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.316 09:58:49 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.578 { 00:06:19.578 "bdev_name": "Malloc0", 00:06:19.578 "nbd_device": "/dev/nbd0" 
00:06:19.578 }, 00:06:19.578 { 00:06:19.578 "bdev_name": "Malloc1", 00:06:19.578 "nbd_device": "/dev/nbd1" 00:06:19.578 } 00:06:19.578 ]' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.578 { 00:06:19.578 "bdev_name": "Malloc0", 00:06:19.578 "nbd_device": "/dev/nbd0" 00:06:19.578 }, 00:06:19.578 { 00:06:19.578 "bdev_name": "Malloc1", 00:06:19.578 "nbd_device": "/dev/nbd1" 00:06:19.578 } 00:06:19.578 ]' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.578 /dev/nbd1' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.578 /dev/nbd1' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.578 256+0 records in 00:06:19.578 256+0 records out 00:06:19.578 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00633283 s, 166 MB/s 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.578 09:58:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.845 256+0 records in 00:06:19.845 256+0 records out 00:06:19.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268646 s, 39.0 MB/s 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.845 256+0 records in 00:06:19.845 256+0 records out 00:06:19.845 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0290628 s, 36.1 MB/s 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@51 -- # local i 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.845 09:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@41 -- # break 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.103 09:58:50 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@41 -- # break 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.362 09:58:50 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.620 09:58:50 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@65 -- # true 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.620 09:58:51 -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.620 09:58:51 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:20.878 09:58:51 -- event/event.sh@35 -- # sleep 3 00:06:21.136 [2024-07-14 09:58:51.541475] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.136 [2024-07-14 09:58:51.598762] reactor.c: 937:reactor_run: 
*NOTICE*: Reactor started on core 1 00:06:21.136 [2024-07-14 09:58:51.598775] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.136 [2024-07-14 09:58:51.653850] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.136 [2024-07-14 09:58:51.653936] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:24.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:24.421 09:58:54 -- event/event.sh@38 -- # waitforlisten 68758 /var/tmp/spdk-nbd.sock 00:06:24.421 09:58:54 -- common/autotest_common.sh@819 -- # '[' -z 68758 ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:24.421 09:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.421 09:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:24.421 09:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.421 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.421 09:58:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.421 09:58:54 -- common/autotest_common.sh@852 -- # return 0 00:06:24.421 09:58:54 -- event/event.sh@39 -- # killprocess 68758 00:06:24.421 09:58:54 -- common/autotest_common.sh@926 -- # '[' -z 68758 ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@930 -- # kill -0 68758 00:06:24.421 09:58:54 -- common/autotest_common.sh@931 -- # uname 00:06:24.421 09:58:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68758 00:06:24.421 killing process with pid 68758 00:06:24.421 09:58:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.421 09:58:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68758' 00:06:24.421 09:58:54 -- common/autotest_common.sh@945 -- # kill 68758 00:06:24.421 09:58:54 -- common/autotest_common.sh@950 -- # wait 68758 00:06:24.421 spdk_app_start is called in Round 0. 00:06:24.421 Shutdown signal received, stop current app iteration 00:06:24.421 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:24.421 spdk_app_start is called in Round 1. 00:06:24.421 Shutdown signal received, stop current app iteration 00:06:24.421 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:24.421 spdk_app_start is called in Round 2. 00:06:24.421 Shutdown signal received, stop current app iteration 00:06:24.421 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 reinitialization... 00:06:24.421 spdk_app_start is called in Round 3. 
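Each round's data pass, traced three times above, reduces to a device count taken from the nbd_get_disks JSON plus a dd/cmp round trip. A condensed sketch with the sizes and paths from this run; the '|| true' keeps grep -c's exit status 1 on zero matches from tripping errexit, which is exactly what the bare 'true' in the detached-state traces is for:

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
rpc=/var/tmp/spdk-nbd.sock
nbd_list=(/dev/nbd0 /dev/nbd1)

# Count attached devices the way nbd_get_count does.
names=$("$rpc_py" -s "$rpc" nbd_get_disks | jq -r '.[] | .nbd_device')
count=$(echo "$names" | grep -c /dev/nbd || true)
[ "$count" -eq 2 ]

# Write pass: seed 1 MiB of random data, then push it through each device
# with direct I/O so the bdev, not the page cache, is what gets compared.
tmp_file=/tmp/nbdrandtest
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
done

# Verify pass: byte-compare the first 1 MiB of every device with the seed.
for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"
done
rm "$tmp_file"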
00:06:24.421 Shutdown signal received, stop current app iteration 00:06:24.421 09:58:54 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:24.421 09:58:54 -- event/event.sh@42 -- # return 0 00:06:24.421 00:06:24.421 real 0m19.028s 00:06:24.421 user 0m42.726s 00:06:24.421 sys 0m3.012s 00:06:24.421 09:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.421 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.421 ************************************ 00:06:24.421 END TEST app_repeat 00:06:24.421 ************************************ 00:06:24.421 09:58:54 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:24.421 09:58:54 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.421 09:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.421 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.421 ************************************ 00:06:24.421 START TEST cpu_locks 00:06:24.421 ************************************ 00:06:24.421 09:58:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:24.421 * Looking for test storage... 00:06:24.421 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:24.421 09:58:54 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:24.421 09:58:54 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:24.421 09:58:54 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:24.421 09:58:54 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:24.421 09:58:54 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.421 09:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.421 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.421 ************************************ 00:06:24.421 START TEST default_locks 00:06:24.421 ************************************ 00:06:24.421 09:58:54 -- common/autotest_common.sh@1104 -- # default_locks 00:06:24.421 09:58:54 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=69382 00:06:24.421 09:58:54 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.422 09:58:54 -- event/cpu_locks.sh@47 -- # waitforlisten 69382 00:06:24.422 09:58:54 -- common/autotest_common.sh@819 -- # '[' -z 69382 ']' 00:06:24.422 09:58:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.422 09:58:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.422 09:58:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.422 09:58:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.422 09:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:24.679 [2024-07-14 09:58:55.011417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
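The cpu_locks suite begins the way every case here does: launch spdk_tgt -m 0x1 and block in waitforlisten until the RPC socket answers or the process dies. A sketch of that gate, reconstructed from the rpc_addr and max_retries=100 lines in the trace; the rpc_get_methods probe and the pacing are assumptions:

waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local i max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # target died while we waited
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
               rpc_get_methods &>/dev/null; then
            return 0                             # socket is up and answering
        fi
        sleep 0.5   # assumed pacing
    done
    return 1
}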
00:06:24.679 [2024-07-14 09:58:55.011520] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69382 ] 00:06:24.679 [2024-07-14 09:58:55.142512] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.679 [2024-07-14 09:58:55.220089] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.679 [2024-07-14 09:58:55.220319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.613 09:58:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.614 09:58:55 -- common/autotest_common.sh@852 -- # return 0 00:06:25.614 09:58:55 -- event/cpu_locks.sh@49 -- # locks_exist 69382 00:06:25.614 09:58:55 -- event/cpu_locks.sh@22 -- # lslocks -p 69382 00:06:25.614 09:58:55 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.872 09:58:56 -- event/cpu_locks.sh@50 -- # killprocess 69382 00:06:25.872 09:58:56 -- common/autotest_common.sh@926 -- # '[' -z 69382 ']' 00:06:25.872 09:58:56 -- common/autotest_common.sh@930 -- # kill -0 69382 00:06:25.872 09:58:56 -- common/autotest_common.sh@931 -- # uname 00:06:25.872 09:58:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:25.872 09:58:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69382 00:06:25.872 09:58:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:25.872 09:58:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:25.872 killing process with pid 69382 00:06:25.872 09:58:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69382' 00:06:25.872 09:58:56 -- common/autotest_common.sh@945 -- # kill 69382 00:06:25.872 09:58:56 -- common/autotest_common.sh@950 -- # wait 69382 00:06:26.440 09:58:56 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 69382 00:06:26.440 09:58:56 -- common/autotest_common.sh@640 -- # local es=0 00:06:26.440 09:58:56 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69382 00:06:26.440 09:58:56 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:26.440 09:58:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.440 09:58:56 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:26.440 09:58:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:26.440 09:58:56 -- common/autotest_common.sh@643 -- # waitforlisten 69382 00:06:26.440 09:58:56 -- common/autotest_common.sh@819 -- # '[' -z 69382 ']' 00:06:26.440 09:58:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.440 09:58:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.440 09:58:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
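The assertions just traced are two small helpers: locks_exist confirms the target holds its per-core file lock, and killprocess is the careful teardown used throughout this suite. Sketches matching the commands above (Linux-only, per the uname guard in the trace):

# spdk_tgt -m 0x1 takes one CPU-core file lock; lslocks lists it by name.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

killprocess() {
    local pid=$1 process_name
    kill -0 "$pid" || return 1                       # must still be running
    process_name=$(ps --no-headers -o comm= "$pid")  # reactor_0 for an SPDK app
    [ "$process_name" != sudo ] || return 1          # never kill a sudo wrapper
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                  # wait only reaps children
}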
00:06:26.440 09:58:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.440 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69382) - No such process 00:06:26.440 ERROR: process (pid: 69382) is no longer running 00:06:26.440 09:58:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:26.440 09:58:56 -- common/autotest_common.sh@852 -- # return 1 00:06:26.440 09:58:56 -- common/autotest_common.sh@643 -- # es=1 00:06:26.440 09:58:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:26.440 09:58:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:26.440 09:58:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:26.440 09:58:56 -- event/cpu_locks.sh@54 -- # no_locks 00:06:26.440 09:58:56 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:26.440 09:58:56 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:26.440 09:58:56 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:26.440 00:06:26.440 real 0m1.781s 00:06:26.440 user 0m1.862s 00:06:26.440 sys 0m0.569s 00:06:26.440 09:58:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.440 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 ************************************ 00:06:26.440 END TEST default_locks 00:06:26.440 ************************************ 00:06:26.440 09:58:56 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:26.440 09:58:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.440 09:58:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.440 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 ************************************ 00:06:26.440 START TEST default_locks_via_rpc 00:06:26.440 ************************************ 00:06:26.440 09:58:56 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:26.440 09:58:56 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=69446 00:06:26.440 09:58:56 -- event/cpu_locks.sh@63 -- # waitforlisten 69446 00:06:26.440 09:58:56 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.440 09:58:56 -- common/autotest_common.sh@819 -- # '[' -z 69446 ']' 00:06:26.440 09:58:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.440 09:58:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.440 09:58:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.440 09:58:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.440 09:58:56 -- common/autotest_common.sh@10 -- # set +x 00:06:26.440 [2024-07-14 09:58:56.857689] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
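The 'No such process' and ERROR lines above are the point of default_locks: after killprocess, a second waitforlisten must fail, and the NOT wrapper converts that failure into a pass. default_locks_via_rpc, coming up next in the trace, re-runs the lock check but flips the locks at runtime over RPC instead of at the command line. Simplified sketches; the real NOT additionally treats exit codes above 128, deaths by signal, as genuine errors, which is what the '(( es > 128 ))' check is for:

NOT() {
    local es=0
    "$@" || es=$?
    ((es != 0))   # succeed only when the wrapped command failed
}
NOT waitforlisten 69382   # passes: that target was killed above

# The runtime toggles default_locks_via_rpc exercises
# (rpc.py defaults to /var/tmp/spdk.sock, the socket used here):
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc_py" framework_disable_cpumask_locks   # release the core locks
"$rpc_py" framework_enable_cpumask_locks    # take them again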
00:06:26.440 [2024-07-14 09:58:56.858402] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69446 ] 00:06:26.699 [2024-07-14 09:58:57.000132] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.699 [2024-07-14 09:58:57.092394] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.699 [2024-07-14 09:58:57.092572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.636 09:58:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.636 09:58:57 -- common/autotest_common.sh@852 -- # return 0 00:06:27.636 09:58:57 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:27.636 09:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:27.636 09:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.636 09:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:27.636 09:58:57 -- event/cpu_locks.sh@67 -- # no_locks 00:06:27.636 09:58:57 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:27.636 09:58:57 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:27.636 09:58:57 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:27.636 09:58:57 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:27.636 09:58:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:27.636 09:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:27.636 09:58:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:27.636 09:58:57 -- event/cpu_locks.sh@71 -- # locks_exist 69446 00:06:27.636 09:58:57 -- event/cpu_locks.sh@22 -- # lslocks -p 69446 00:06:27.636 09:58:57 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:27.895 09:58:58 -- event/cpu_locks.sh@73 -- # killprocess 69446 00:06:27.895 09:58:58 -- common/autotest_common.sh@926 -- # '[' -z 69446 ']' 00:06:27.895 09:58:58 -- common/autotest_common.sh@930 -- # kill -0 69446 00:06:27.895 09:58:58 -- common/autotest_common.sh@931 -- # uname 00:06:27.895 09:58:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:27.895 09:58:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69446 00:06:27.895 09:58:58 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:27.895 09:58:58 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:27.895 killing process with pid 69446 00:06:27.895 09:58:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69446' 00:06:27.895 09:58:58 -- common/autotest_common.sh@945 -- # kill 69446 00:06:27.895 09:58:58 -- common/autotest_common.sh@950 -- # wait 69446 00:06:28.463 00:06:28.463 real 0m1.926s 00:06:28.463 user 0m2.102s 00:06:28.464 sys 0m0.577s 00:06:28.464 09:58:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.464 09:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.464 ************************************ 00:06:28.464 END TEST default_locks_via_rpc 00:06:28.464 ************************************ 00:06:28.464 09:58:58 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:28.464 09:58:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:28.464 09:58:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:28.464 09:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.464 
************************************ 00:06:28.464 START TEST non_locking_app_on_locked_coremask 00:06:28.464 ************************************ 00:06:28.464 09:58:58 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:28.464 09:58:58 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=69515 00:06:28.464 09:58:58 -- event/cpu_locks.sh@81 -- # waitforlisten 69515 /var/tmp/spdk.sock 00:06:28.464 09:58:58 -- common/autotest_common.sh@819 -- # '[' -z 69515 ']' 00:06:28.464 09:58:58 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:28.464 09:58:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.464 09:58:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:28.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.464 09:58:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.464 09:58:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:28.464 09:58:58 -- common/autotest_common.sh@10 -- # set +x 00:06:28.464 [2024-07-14 09:58:58.844718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:28.464 [2024-07-14 09:58:58.844832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69515 ] 00:06:28.464 [2024-07-14 09:58:58.983402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.723 [2024-07-14 09:58:59.043206] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:28.723 [2024-07-14 09:58:59.043454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.290 09:58:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:29.290 09:58:59 -- common/autotest_common.sh@852 -- # return 0 00:06:29.290 09:58:59 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=69543 00:06:29.290 09:58:59 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:29.290 09:58:59 -- event/cpu_locks.sh@85 -- # waitforlisten 69543 /var/tmp/spdk2.sock 00:06:29.290 09:58:59 -- common/autotest_common.sh@819 -- # '[' -z 69543 ']' 00:06:29.290 09:58:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.290 09:58:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:29.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.291 09:58:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.291 09:58:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:29.291 09:58:59 -- common/autotest_common.sh@10 -- # set +x 00:06:29.549 [2024-07-14 09:58:59.885110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:29.549 [2024-07-14 09:58:59.885259] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69543 ] 00:06:29.549 [2024-07-14 09:59:00.024971] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
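The 'CPU core locks deactivated' notice confirms the setup this test needs: the first target owns the core-0 lock, so a second target on the same mask can only come up by opting out of locking and taking its own RPC socket. The launch pattern with the flags and paths from the trace (backgrounding and pid capture are sketch plumbing):

# First instance: locks enabled, default socket /var/tmp/spdk.sock.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
pid1=$!
# Second instance: same core mask, but lock-free and on a separate socket;
# with locking enabled it could not start at all.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 \
    --disable-cpumask-locks -r /var/tmp/spdk2.sock &
pid2=$!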
00:06:29.549 [2024-07-14 09:59:00.025041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.808 [2024-07-14 09:59:00.204040] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.808 [2024-07-14 09:59:00.204263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.394 09:59:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:30.394 09:59:00 -- common/autotest_common.sh@852 -- # return 0 00:06:30.394 09:59:00 -- event/cpu_locks.sh@87 -- # locks_exist 69515 00:06:30.394 09:59:00 -- event/cpu_locks.sh@22 -- # lslocks -p 69515 00:06:30.394 09:59:00 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:31.350 09:59:01 -- event/cpu_locks.sh@89 -- # killprocess 69515 00:06:31.350 09:59:01 -- common/autotest_common.sh@926 -- # '[' -z 69515 ']' 00:06:31.350 09:59:01 -- common/autotest_common.sh@930 -- # kill -0 69515 00:06:31.350 09:59:01 -- common/autotest_common.sh@931 -- # uname 00:06:31.350 09:59:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.350 09:59:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69515 00:06:31.350 09:59:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.350 killing process with pid 69515 00:06:31.350 09:59:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.350 09:59:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69515' 00:06:31.350 09:59:01 -- common/autotest_common.sh@945 -- # kill 69515 00:06:31.350 09:59:01 -- common/autotest_common.sh@950 -- # wait 69515 00:06:31.916 09:59:02 -- event/cpu_locks.sh@90 -- # killprocess 69543 00:06:31.917 09:59:02 -- common/autotest_common.sh@926 -- # '[' -z 69543 ']' 00:06:31.917 09:59:02 -- common/autotest_common.sh@930 -- # kill -0 69543 00:06:31.917 09:59:02 -- common/autotest_common.sh@931 -- # uname 00:06:31.917 09:59:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:31.917 09:59:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69543 00:06:31.917 09:59:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:31.917 09:59:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:31.917 killing process with pid 69543 00:06:31.917 09:59:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69543' 00:06:31.917 09:59:02 -- common/autotest_common.sh@945 -- # kill 69543 00:06:31.917 09:59:02 -- common/autotest_common.sh@950 -- # wait 69543 00:06:32.485 00:06:32.485 real 0m4.057s 00:06:32.485 user 0m4.478s 00:06:32.485 sys 0m1.117s 00:06:32.485 09:59:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.485 ************************************ 00:06:32.485 END TEST non_locking_app_on_locked_coremask 00:06:32.485 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.485 ************************************ 00:06:32.485 09:59:02 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:32.485 09:59:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:32.485 09:59:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.485 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.485 ************************************ 00:06:32.485 START TEST locking_app_on_unlocked_coremask 00:06:32.485 ************************************ 00:06:32.485 09:59:02 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:32.485 09:59:02 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=69624 00:06:32.485 09:59:02 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:32.485 09:59:02 -- event/cpu_locks.sh@99 -- # waitforlisten 69624 /var/tmp/spdk.sock 00:06:32.485 09:59:02 -- common/autotest_common.sh@819 -- # '[' -z 69624 ']' 00:06:32.485 09:59:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.485 09:59:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:32.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.485 09:59:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.485 09:59:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:32.485 09:59:02 -- common/autotest_common.sh@10 -- # set +x 00:06:32.485 [2024-07-14 09:59:02.957345] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:32.485 [2024-07-14 09:59:02.957480] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69624 ] 00:06:32.744 [2024-07-14 09:59:03.093117] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:32.744 [2024-07-14 09:59:03.093162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.744 [2024-07-14 09:59:03.180523] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:32.744 [2024-07-14 09:59:03.180694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.677 09:59:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:33.677 09:59:03 -- common/autotest_common.sh@852 -- # return 0 00:06:33.677 09:59:03 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=69652 00:06:33.677 09:59:03 -- event/cpu_locks.sh@103 -- # waitforlisten 69652 /var/tmp/spdk2.sock 00:06:33.677 09:59:03 -- common/autotest_common.sh@819 -- # '[' -z 69652 ']' 00:06:33.677 09:59:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.677 09:59:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.677 09:59:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.677 09:59:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.677 09:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:33.677 09:59:03 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:33.677 [2024-07-14 09:59:03.986522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
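locking_app_on_unlocked_coremask flips the roles: the first target (pid 69624 above) declines the lock with --disable-cpumask-locks, leaving core 0 unlocked, so the second, lock-taking target can still claim it and is the one lslocks will report. The inverse launch order in sketch form:

/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks &  # holds no lock
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock &   # acquires the lock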
00:06:33.677 [2024-07-14 09:59:03.986620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69652 ] 00:06:33.677 [2024-07-14 09:59:04.130977] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.934 [2024-07-14 09:59:04.360977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.934 [2024-07-14 09:59:04.361154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.306 09:59:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:35.306 09:59:05 -- common/autotest_common.sh@852 -- # return 0 00:06:35.306 09:59:05 -- event/cpu_locks.sh@105 -- # locks_exist 69652 00:06:35.306 09:59:05 -- event/cpu_locks.sh@22 -- # lslocks -p 69652 00:06:35.306 09:59:05 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.237 09:59:06 -- event/cpu_locks.sh@107 -- # killprocess 69624 00:06:36.237 09:59:06 -- common/autotest_common.sh@926 -- # '[' -z 69624 ']' 00:06:36.237 09:59:06 -- common/autotest_common.sh@930 -- # kill -0 69624 00:06:36.237 09:59:06 -- common/autotest_common.sh@931 -- # uname 00:06:36.237 09:59:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:36.237 09:59:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69624 00:06:36.237 09:59:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:36.237 09:59:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:36.237 killing process with pid 69624 00:06:36.237 09:59:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69624' 00:06:36.237 09:59:06 -- common/autotest_common.sh@945 -- # kill 69624 00:06:36.237 09:59:06 -- common/autotest_common.sh@950 -- # wait 69624 00:06:37.170 09:59:07 -- event/cpu_locks.sh@108 -- # killprocess 69652 00:06:37.170 09:59:07 -- common/autotest_common.sh@926 -- # '[' -z 69652 ']' 00:06:37.170 09:59:07 -- common/autotest_common.sh@930 -- # kill -0 69652 00:06:37.170 09:59:07 -- common/autotest_common.sh@931 -- # uname 00:06:37.171 09:59:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.171 09:59:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69652 00:06:37.171 09:59:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.171 09:59:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.171 killing process with pid 69652 00:06:37.171 09:59:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69652' 00:06:37.171 09:59:07 -- common/autotest_common.sh@945 -- # kill 69652 00:06:37.171 09:59:07 -- common/autotest_common.sh@950 -- # wait 69652 00:06:37.736 00:06:37.736 real 0m5.176s 00:06:37.736 user 0m5.538s 00:06:37.736 sys 0m1.375s 00:06:37.736 09:59:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.736 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.736 ************************************ 00:06:37.736 END TEST locking_app_on_unlocked_coremask 00:06:37.736 ************************************ 00:06:37.736 09:59:08 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:37.736 09:59:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.736 09:59:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.736 09:59:08 -- common/autotest_common.sh@10 -- # set +x 
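Note that in both tests so far the harness does not trust exit codes to decide who owns a core: the locks_exist helper asks the kernel, listing the process's advisory file locks and grepping for the spdk_cpu_lock files, exactly as the lslocks calls above show. Reproducing the check by hand (lslocks is from util-linux; the pid is the one from this run):

    lslocks -p 69652 | grep -q spdk_cpu_lock && echo 'core lock held by 69652'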
00:06:37.736 ************************************ 00:06:37.736 START TEST locking_app_on_locked_coremask 00:06:37.736 ************************************ 00:06:37.736 09:59:08 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:37.736 09:59:08 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=69750 00:06:37.736 09:59:08 -- event/cpu_locks.sh@116 -- # waitforlisten 69750 /var/tmp/spdk.sock 00:06:37.736 09:59:08 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.736 09:59:08 -- common/autotest_common.sh@819 -- # '[' -z 69750 ']' 00:06:37.736 09:59:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.736 09:59:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.736 09:59:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.736 09:59:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.736 09:59:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.736 [2024-07-14 09:59:08.182829] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:37.736 [2024-07-14 09:59:08.183586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69750 ] 00:06:37.994 [2024-07-14 09:59:08.322142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.994 [2024-07-14 09:59:08.433211] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.994 [2024-07-14 09:59:08.433469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.927 09:59:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.927 09:59:09 -- common/autotest_common.sh@852 -- # return 0 00:06:38.927 09:59:09 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=69778 00:06:38.927 09:59:09 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 69778 /var/tmp/spdk2.sock 00:06:38.927 09:59:09 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:38.927 09:59:09 -- common/autotest_common.sh@640 -- # local es=0 00:06:38.927 09:59:09 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69778 /var/tmp/spdk2.sock 00:06:38.927 09:59:09 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:38.927 09:59:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.927 09:59:09 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:38.927 09:59:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:38.927 09:59:09 -- common/autotest_common.sh@643 -- # waitforlisten 69778 /var/tmp/spdk2.sock 00:06:38.927 09:59:09 -- common/autotest_common.sh@819 -- # '[' -z 69778 ']' 00:06:38.927 09:59:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.927 09:59:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:38.927 09:59:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
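The launch being awaited here is expected to fail: the primary target (pid 69750) already holds the core 0 lock, so a second spdk_tgt on the same mask cannot start, and the harness wraps waitforlisten in its NOT helper, which passes only when the wrapped command fails. A simplified stand-in for that helper (the real one in autotest_common.sh also normalizes the es status code seen in the xtrace below):

    NOT() { if "$@"; then return 1; else return 0; fi; }   # hedged approximation of the harness helper
    NOT waitforlisten 69778 /var/tmp/spdk2.sock            # succeeds only because the target exits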
00:06:38.927 09:59:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.927 09:59:09 -- common/autotest_common.sh@10 -- # set +x 00:06:38.927 [2024-07-14 09:59:09.199193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:38.927 [2024-07-14 09:59:09.199309] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69778 ] 00:06:38.927 [2024-07-14 09:59:09.340920] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 69750 has claimed it. 00:06:38.927 [2024-07-14 09:59:09.340983] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:39.493 ERROR: process (pid: 69778) is no longer running 00:06:39.493 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69778) - No such process 00:06:39.493 09:59:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.493 09:59:09 -- common/autotest_common.sh@852 -- # return 1 00:06:39.493 09:59:09 -- common/autotest_common.sh@643 -- # es=1 00:06:39.493 09:59:09 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.493 09:59:09 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.493 09:59:09 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.493 09:59:09 -- event/cpu_locks.sh@122 -- # locks_exist 69750 00:06:39.493 09:59:09 -- event/cpu_locks.sh@22 -- # lslocks -p 69750 00:06:39.493 09:59:09 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:39.752 09:59:10 -- event/cpu_locks.sh@124 -- # killprocess 69750 00:06:39.752 09:59:10 -- common/autotest_common.sh@926 -- # '[' -z 69750 ']' 00:06:39.752 09:59:10 -- common/autotest_common.sh@930 -- # kill -0 69750 00:06:39.752 09:59:10 -- common/autotest_common.sh@931 -- # uname 00:06:39.752 09:59:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:39.752 09:59:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69750 00:06:39.752 09:59:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:39.752 killing process with pid 69750 00:06:39.752 09:59:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:39.752 09:59:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69750' 00:06:39.752 09:59:10 -- common/autotest_common.sh@945 -- # kill 69750 00:06:39.752 09:59:10 -- common/autotest_common.sh@950 -- # wait 69750 00:06:40.319 00:06:40.319 real 0m2.531s 00:06:40.319 user 0m2.735s 00:06:40.319 sys 0m0.682s 00:06:40.319 09:59:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.319 09:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.319 ************************************ 00:06:40.319 END TEST locking_app_on_locked_coremask 00:06:40.319 ************************************ 00:06:40.319 09:59:10 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:40.319 09:59:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.319 09:59:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.319 09:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.319 ************************************ 00:06:40.319 START TEST locking_overlapped_coremask 00:06:40.319 ************************************ 00:06:40.319 09:59:10 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:40.319 09:59:10 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=69824 00:06:40.319 09:59:10 -- event/cpu_locks.sh@133 -- # waitforlisten 69824 /var/tmp/spdk.sock 00:06:40.319 09:59:10 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:40.319 09:59:10 -- common/autotest_common.sh@819 -- # '[' -z 69824 ']' 00:06:40.319 09:59:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.319 09:59:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:40.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.319 09:59:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.319 09:59:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:40.319 09:59:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.319 [2024-07-14 09:59:10.777473] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:40.319 [2024-07-14 09:59:10.777579] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69824 ] 00:06:40.578 [2024-07-14 09:59:10.918692] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:40.578 [2024-07-14 09:59:11.034516] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.578 [2024-07-14 09:59:11.034751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.578 [2024-07-14 09:59:11.035130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.578 [2024-07-14 09:59:11.035141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.551 09:59:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:41.551 09:59:11 -- common/autotest_common.sh@852 -- # return 0 00:06:41.551 09:59:11 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=69854 00:06:41.551 09:59:11 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 69854 /var/tmp/spdk2.sock 00:06:41.551 09:59:11 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:41.551 09:59:11 -- common/autotest_common.sh@640 -- # local es=0 00:06:41.551 09:59:11 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 69854 /var/tmp/spdk2.sock 00:06:41.551 09:59:11 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:41.551 09:59:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.551 09:59:11 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:41.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:41.551 09:59:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:41.551 09:59:11 -- common/autotest_common.sh@643 -- # waitforlisten 69854 /var/tmp/spdk2.sock 00:06:41.551 09:59:11 -- common/autotest_common.sh@819 -- # '[' -z 69854 ']' 00:06:41.551 09:59:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:41.551 09:59:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:41.551 09:59:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
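The two cpumasks chosen for locking_overlapped_coremask intersect on exactly one core: -m 0x7 covers cores 0-2 and -m 0x1c covers cores 2-4, so the second target must trip over the core 2 lock, which is what the claim_cpu_cores error further down reports. The overlap is plain shell arithmetic:

    printf 'shared cores: 0x%x\n' $((0x7 & 0x1c))   # prints 0x4, i.e. core 2 only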
00:06:41.551 09:59:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:41.551 09:59:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.551 [2024-07-14 09:59:11.823882] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:41.552 [2024-07-14 09:59:11.823982] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69854 ] 00:06:41.552 [2024-07-14 09:59:11.970429] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69824 has claimed it. 00:06:41.552 [2024-07-14 09:59:11.970495] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:42.117 ERROR: process (pid: 69854) is no longer running 00:06:42.117 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (69854) - No such process 00:06:42.117 09:59:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:42.117 09:59:12 -- common/autotest_common.sh@852 -- # return 1 00:06:42.117 09:59:12 -- common/autotest_common.sh@643 -- # es=1 00:06:42.117 09:59:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:42.117 09:59:12 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:42.117 09:59:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:42.117 09:59:12 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:42.117 09:59:12 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:42.117 09:59:12 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:42.117 09:59:12 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:42.117 09:59:12 -- event/cpu_locks.sh@141 -- # killprocess 69824 00:06:42.117 09:59:12 -- common/autotest_common.sh@926 -- # '[' -z 69824 ']' 00:06:42.117 09:59:12 -- common/autotest_common.sh@930 -- # kill -0 69824 00:06:42.117 09:59:12 -- common/autotest_common.sh@931 -- # uname 00:06:42.117 09:59:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:42.118 09:59:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69824 00:06:42.118 09:59:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:42.118 09:59:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:42.118 09:59:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69824' 00:06:42.118 killing process with pid 69824 00:06:42.118 09:59:12 -- common/autotest_common.sh@945 -- # kill 69824 00:06:42.118 09:59:12 -- common/autotest_common.sh@950 -- # wait 69824 00:06:42.683 00:06:42.683 real 0m2.406s 00:06:42.683 user 0m6.535s 00:06:42.683 sys 0m0.563s 00:06:42.683 09:59:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.683 09:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.683 ************************************ 00:06:42.683 END TEST locking_overlapped_coremask 00:06:42.683 ************************************ 00:06:42.683 09:59:13 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:42.683 09:59:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:42.683 09:59:13 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:42.683 09:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.683 ************************************ 00:06:42.683 START TEST locking_overlapped_coremask_via_rpc 00:06:42.683 ************************************ 00:06:42.683 09:59:13 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:42.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.683 09:59:13 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=69911 00:06:42.683 09:59:13 -- event/cpu_locks.sh@149 -- # waitforlisten 69911 /var/tmp/spdk.sock 00:06:42.683 09:59:13 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:42.683 09:59:13 -- common/autotest_common.sh@819 -- # '[' -z 69911 ']' 00:06:42.683 09:59:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.683 09:59:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:42.683 09:59:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.683 09:59:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:42.683 09:59:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.942 [2024-07-14 09:59:13.244293] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:42.942 [2024-07-14 09:59:13.244389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69911 ] 00:06:42.942 [2024-07-14 09:59:13.386126] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:42.942 [2024-07-14 09:59:13.386174] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.942 [2024-07-14 09:59:13.484264] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:42.942 [2024-07-14 09:59:13.484918] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.942 [2024-07-14 09:59:13.485055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.942 [2024-07-14 09:59:13.485065] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.882 09:59:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:43.882 09:59:14 -- common/autotest_common.sh@852 -- # return 0 00:06:43.882 09:59:14 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:43.882 09:59:14 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=69941 00:06:43.882 09:59:14 -- event/cpu_locks.sh@153 -- # waitforlisten 69941 /var/tmp/spdk2.sock 00:06:43.882 09:59:14 -- common/autotest_common.sh@819 -- # '[' -z 69941 ']' 00:06:43.882 09:59:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:43.882 09:59:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.882 09:59:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:43.882 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
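The via_rpc variant inverts the earlier tests: both targets start with --disable-cpumask-locks, and the locks are claimed afterwards, at runtime, through the framework_enable_cpumask_locks RPC whose name appears verbatim below. With SPDK's stock client the sequence would look roughly like this (the scripts/rpc.py path is assumed; the method name and sockets are from the log):

    scripts/rpc.py framework_enable_cpumask_locks                          # primary target claims cores 0-2
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # should fail: core 2 already locked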
00:06:43.882 09:59:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.882 09:59:14 -- common/autotest_common.sh@10 -- # set +x 00:06:43.882 [2024-07-14 09:59:14.243445] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:43.882 [2024-07-14 09:59:14.243532] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69941 ] 00:06:43.882 [2024-07-14 09:59:14.378444] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:43.882 [2024-07-14 09:59:14.378505] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:44.140 [2024-07-14 09:59:14.561898] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:44.140 [2024-07-14 09:59:14.562293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.140 [2024-07-14 09:59:14.563109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:44.140 [2024-07-14 09:59:14.563112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:44.706 09:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.706 09:59:15 -- common/autotest_common.sh@852 -- # return 0 00:06:44.706 09:59:15 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:44.706 09:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.706 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.706 09:59:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.706 09:59:15 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.706 09:59:15 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.706 09:59:15 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.706 09:59:15 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:44.706 09:59:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.706 09:59:15 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:44.706 09:59:15 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.706 09:59:15 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:44.706 09:59:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.706 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.706 [2024-07-14 09:59:15.207411] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 69911 has claimed it. 00:06:44.706 2024/07/14 09:59:15 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:06:44.706 request: 00:06:44.706 { 00:06:44.706 "method": "framework_enable_cpumask_locks", 00:06:44.706 "params": {} 00:06:44.706 } 00:06:44.706 Got JSON-RPC error response 00:06:44.706 GoRPCClient: error on JSON-RPC call 00:06:44.706 09:59:15 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:44.706 09:59:15 -- common/autotest_common.sh@643 -- # es=1 00:06:44.706 09:59:15 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:44.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
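The failure just logged is worth unpacking: claim_cpu_cores reports the conflict on core 2, spdk_app_start refuses the lock, and the RPC layer surfaces it as JSON-RPC error -32603 (internal error), with the request and response echoed above by the Go RPC client. Reconstructed as a raw exchange over the UNIX socket (method and params are verbatim from the log; driving the socket with an nc that supports -U is an illustration, not something the test does):

    printf '%s\n' '{"jsonrpc":"2.0","id":1,"method":"framework_enable_cpumask_locks","params":{}}' \
        | nc -U /var/tmp/spdk2.sock   # response carries Code=-32603 Msg='Failed to claim CPU core: 2'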
00:06:44.706 09:59:15 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:44.706 09:59:15 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:44.706 09:59:15 -- event/cpu_locks.sh@158 -- # waitforlisten 69911 /var/tmp/spdk.sock 00:06:44.706 09:59:15 -- common/autotest_common.sh@819 -- # '[' -z 69911 ']' 00:06:44.706 09:59:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.706 09:59:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.706 09:59:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.706 09:59:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.706 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.965 09:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:44.965 09:59:15 -- common/autotest_common.sh@852 -- # return 0 00:06:44.965 09:59:15 -- event/cpu_locks.sh@159 -- # waitforlisten 69941 /var/tmp/spdk2.sock 00:06:44.965 09:59:15 -- common/autotest_common.sh@819 -- # '[' -z 69941 ']' 00:06:44.965 09:59:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:44.965 09:59:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:44.965 09:59:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:44.965 09:59:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:44.965 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.533 ************************************ 00:06:45.533 END TEST locking_overlapped_coremask_via_rpc 00:06:45.533 ************************************ 00:06:45.533 09:59:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:45.533 09:59:15 -- common/autotest_common.sh@852 -- # return 0 00:06:45.533 09:59:15 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:45.533 09:59:15 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:45.533 09:59:15 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:45.533 09:59:15 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:45.533 00:06:45.533 real 0m2.613s 00:06:45.533 user 0m1.328s 00:06:45.533 sys 0m0.215s 00:06:45.533 09:59:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.533 09:59:15 -- common/autotest_common.sh@10 -- # set +x 00:06:45.533 09:59:15 -- event/cpu_locks.sh@174 -- # cleanup 00:06:45.533 09:59:15 -- event/cpu_locks.sh@15 -- # [[ -z 69911 ]] 00:06:45.533 09:59:15 -- event/cpu_locks.sh@15 -- # killprocess 69911 00:06:45.533 09:59:15 -- common/autotest_common.sh@926 -- # '[' -z 69911 ']' 00:06:45.533 09:59:15 -- common/autotest_common.sh@930 -- # kill -0 69911 00:06:45.533 09:59:15 -- common/autotest_common.sh@931 -- # uname 00:06:45.533 09:59:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:45.533 09:59:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69911 00:06:45.533 killing process with pid 69911 00:06:45.533 09:59:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:45.533 09:59:15 -- common/autotest_common.sh@936 -- # '[' 
reactor_0 = sudo ']' 00:06:45.533 09:59:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69911' 00:06:45.533 09:59:15 -- common/autotest_common.sh@945 -- # kill 69911 00:06:45.533 09:59:15 -- common/autotest_common.sh@950 -- # wait 69911 00:06:46.102 09:59:16 -- event/cpu_locks.sh@16 -- # [[ -z 69941 ]] 00:06:46.102 09:59:16 -- event/cpu_locks.sh@16 -- # killprocess 69941 00:06:46.102 09:59:16 -- common/autotest_common.sh@926 -- # '[' -z 69941 ']' 00:06:46.102 09:59:16 -- common/autotest_common.sh@930 -- # kill -0 69941 00:06:46.102 09:59:16 -- common/autotest_common.sh@931 -- # uname 00:06:46.102 09:59:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:46.102 09:59:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 69941 00:06:46.102 killing process with pid 69941 00:06:46.102 09:59:16 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:46.102 09:59:16 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:46.102 09:59:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 69941' 00:06:46.102 09:59:16 -- common/autotest_common.sh@945 -- # kill 69941 00:06:46.102 09:59:16 -- common/autotest_common.sh@950 -- # wait 69941 00:06:46.360 09:59:16 -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.360 Process with pid 69911 is not found 00:06:46.360 Process with pid 69941 is not found 00:06:46.360 09:59:16 -- event/cpu_locks.sh@1 -- # cleanup 00:06:46.360 09:59:16 -- event/cpu_locks.sh@15 -- # [[ -z 69911 ]] 00:06:46.360 09:59:16 -- event/cpu_locks.sh@15 -- # killprocess 69911 00:06:46.360 09:59:16 -- common/autotest_common.sh@926 -- # '[' -z 69911 ']' 00:06:46.360 09:59:16 -- common/autotest_common.sh@930 -- # kill -0 69911 00:06:46.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69911) - No such process 00:06:46.360 09:59:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69911 is not found' 00:06:46.360 09:59:16 -- event/cpu_locks.sh@16 -- # [[ -z 69941 ]] 00:06:46.360 09:59:16 -- event/cpu_locks.sh@16 -- # killprocess 69941 00:06:46.360 09:59:16 -- common/autotest_common.sh@926 -- # '[' -z 69941 ']' 00:06:46.360 09:59:16 -- common/autotest_common.sh@930 -- # kill -0 69941 00:06:46.360 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (69941) - No such process 00:06:46.360 09:59:16 -- common/autotest_common.sh@953 -- # echo 'Process with pid 69941 is not found' 00:06:46.360 09:59:16 -- event/cpu_locks.sh@18 -- # rm -f 00:06:46.360 00:06:46.360 real 0m21.957s 00:06:46.360 user 0m37.679s 00:06:46.360 sys 0m6.096s 00:06:46.360 09:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.360 ************************************ 00:06:46.360 END TEST cpu_locks 00:06:46.360 ************************************ 00:06:46.360 09:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.360 00:06:46.360 real 0m50.145s 00:06:46.360 user 1m36.558s 00:06:46.360 sys 0m9.904s 00:06:46.360 09:59:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.360 09:59:16 -- common/autotest_common.sh@10 -- # set +x 00:06:46.360 ************************************ 00:06:46.360 END TEST event 00:06:46.360 ************************************ 00:06:46.619 09:59:16 -- spdk/autotest.sh@188 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.619 09:59:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:46.619 09:59:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.619 09:59:16 
-- common/autotest_common.sh@10 -- # set +x 00:06:46.619 ************************************ 00:06:46.619 START TEST thread 00:06:46.619 ************************************ 00:06:46.619 09:59:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:46.619 * Looking for test storage... 00:06:46.619 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:46.619 09:59:16 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.619 09:59:17 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:46.619 09:59:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:46.619 09:59:17 -- common/autotest_common.sh@10 -- # set +x 00:06:46.619 ************************************ 00:06:46.619 START TEST thread_poller_perf 00:06:46.619 ************************************ 00:06:46.619 09:59:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:46.619 [2024-07-14 09:59:17.026080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:46.619 [2024-07-14 09:59:17.026169] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70088 ] 00:06:46.619 [2024-07-14 09:59:17.162462] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.878 [2024-07-14 09:59:17.283064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.878 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:48.254 ====================================== 00:06:48.254 busy:2209861908 (cyc) 00:06:48.254 total_run_count: 373000 00:06:48.254 tsc_hz: 2200000000 (cyc) 00:06:48.254 ====================================== 00:06:48.254 poller_cost: 5924 (cyc), 2692 (nsec) 00:06:48.254 00:06:48.254 real 0m1.359s 00:06:48.254 user 0m1.184s 00:06:48.254 sys 0m0.068s 00:06:48.254 09:59:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.255 ************************************ 00:06:48.255 END TEST thread_poller_perf 00:06:48.255 ************************************ 00:06:48.255 09:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.255 09:59:18 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.255 09:59:18 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:48.255 09:59:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.255 09:59:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.255 ************************************ 00:06:48.255 START TEST thread_poller_perf 00:06:48.255 ************************************ 00:06:48.255 09:59:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:48.255 [2024-07-14 09:59:18.437410] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:48.255 [2024-07-14 09:59:18.437507] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70123 ] 00:06:48.255 [2024-07-14 09:59:18.567674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.255 [2024-07-14 09:59:18.678479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.255 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:49.631 ====================================== 00:06:49.631 busy:2203624142 (cyc) 00:06:49.631 total_run_count: 4903000 00:06:49.631 tsc_hz: 2200000000 (cyc) 00:06:49.631 ====================================== 00:06:49.631 poller_cost: 449 (cyc), 204 (nsec) 00:06:49.631 00:06:49.631 real 0m1.340s 00:06:49.631 user 0m1.167s 00:06:49.631 sys 0m0.065s 00:06:49.631 09:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.631 09:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.631 ************************************ 00:06:49.631 END TEST thread_poller_perf 00:06:49.631 ************************************ 00:06:49.631 09:59:19 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:49.631 00:06:49.631 real 0m2.885s 00:06:49.631 user 0m2.415s 00:06:49.631 sys 0m0.251s 00:06:49.631 09:59:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:49.631 09:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.631 ************************************ 00:06:49.631 END TEST thread 00:06:49.631 ************************************ 00:06:49.631 09:59:19 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.631 09:59:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:49.631 09:59:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:49.631 09:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.631 ************************************ 00:06:49.631 START TEST accel 00:06:49.631 ************************************ 00:06:49.631 09:59:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:49.631 * Looking for test storage... 00:06:49.631 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:49.631 09:59:19 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:49.631 09:59:19 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:49.631 09:59:19 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:49.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.631 09:59:19 -- accel/accel.sh@59 -- # spdk_tgt_pid=70191 00:06:49.631 09:59:19 -- accel/accel.sh@60 -- # waitforlisten 70191 00:06:49.631 09:59:19 -- common/autotest_common.sh@819 -- # '[' -z 70191 ']' 00:06:49.631 09:59:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.631 09:59:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:49.631 09:59:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
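The two poller_perf summaries above reduce to simple arithmetic: poller_cost is busy TSC cycles divided by total_run_count, then converted to nanoseconds using the reported tsc_hz of 2200000000. Integer division reproduces the printed numbers exactly:

    echo $((2209861908 / 373000))   # 5924 cyc per poller, 1 us period run
    echo $((5924 * 1000 / 2200))    # 2692 ns at 2.2 GHz
    echo $((2203624142 / 4903000))  # 449 cyc per poller, 0 us period run
    echo $((449 * 1000 / 2200))     # 204 ns

The order-of-magnitude gap between the two runs plausibly reflects timed-poller bookkeeping: the 1 microsecond period run pays timer overhead on every invocation, while the 0 period run is a tight busy loop.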
00:06:49.631 09:59:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:49.631 09:59:19 -- common/autotest_common.sh@10 -- # set +x 00:06:49.631 09:59:19 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:49.631 09:59:19 -- accel/accel.sh@58 -- # build_accel_config 00:06:49.631 09:59:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:49.631 09:59:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:49.631 09:59:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.631 09:59:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:49.631 09:59:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:49.631 09:59:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:49.631 09:59:19 -- accel/accel.sh@42 -- # jq -r . 00:06:49.631 [2024-07-14 09:59:20.012257] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:49.631 [2024-07-14 09:59:20.012347] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70191 ] 00:06:49.631 [2024-07-14 09:59:20.155855] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.890 [2024-07-14 09:59:20.264119] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:49.890 [2024-07-14 09:59:20.264381] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.456 09:59:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:50.456 09:59:20 -- common/autotest_common.sh@852 -- # return 0 00:06:50.456 09:59:20 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:50.456 09:59:20 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:50.456 09:59:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:50.456 09:59:20 -- common/autotest_common.sh@10 -- # set +x 00:06:50.456 09:59:20 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:50.716 09:59:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # IFS== 00:06:50.716 09:59:21 -- accel/accel.sh@64 -- # read -r opc module 00:06:50.716 09:59:21 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:50.716 09:59:21 -- accel/accel.sh@67 -- # killprocess 70191 00:06:50.716 09:59:21 -- common/autotest_common.sh@926 -- # '[' -z 70191 ']' 00:06:50.716 09:59:21 -- common/autotest_common.sh@930 -- # kill -0 70191 00:06:50.716 09:59:21 -- common/autotest_common.sh@931 -- # uname 00:06:50.716 09:59:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:50.716 09:59:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70191 00:06:50.716 killing process with pid 70191 00:06:50.716 09:59:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:50.716 09:59:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:50.716 09:59:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70191' 00:06:50.716 09:59:21 -- common/autotest_common.sh@945 -- # kill 70191 00:06:50.716 09:59:21 -- common/autotest_common.sh@950 -- # wait 70191 00:06:51.284 09:59:21 -- accel/accel.sh@68 -- # trap - ERR 00:06:51.284 09:59:21 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:51.284 09:59:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:51.284 09:59:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.284 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.284 09:59:21 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:51.284 09:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.284 09:59:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:51.284 09:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.284 09:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.284 09:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.284 09:59:21 -- accel/accel.sh@42 -- # jq -r . 
00:06:51.284 09:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.284 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.284 09:59:21 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:51.284 09:59:21 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.284 09:59:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.284 09:59:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.284 ************************************ 00:06:51.284 START TEST accel_missing_filename 00:06:51.284 ************************************ 00:06:51.284 09:59:21 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:51.284 09:59:21 -- common/autotest_common.sh@640 -- # local es=0 00:06:51.284 09:59:21 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:51.284 09:59:21 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:51.284 09:59:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.284 09:59:21 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:51.284 09:59:21 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.284 09:59:21 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:51.284 09:59:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:51.284 09:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.284 09:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.284 09:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.284 09:59:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.284 09:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.284 09:59:21 -- accel/accel.sh@42 -- # jq -r . 00:06:51.284 [2024-07-14 09:59:21.691636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:51.284 [2024-07-14 09:59:21.691738] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70268 ] 00:06:51.284 [2024-07-14 09:59:21.828702] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.545 [2024-07-14 09:59:21.924136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.545 [2024-07-14 09:59:22.006671] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:51.804 [2024-07-14 09:59:22.129585] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:51.804 A filename is required. 
00:06:51.804 ************************************ 00:06:51.804 END TEST accel_missing_filename 00:06:51.804 ************************************ 00:06:51.804 09:59:22 -- common/autotest_common.sh@643 -- # es=234 00:06:51.804 09:59:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:51.804 09:59:22 -- common/autotest_common.sh@652 -- # es=106 00:06:51.804 09:59:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:51.804 09:59:22 -- common/autotest_common.sh@660 -- # es=1 00:06:51.804 09:59:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:51.804 00:06:51.804 real 0m0.565s 00:06:51.804 user 0m0.356s 00:06:51.804 sys 0m0.155s 00:06:51.804 09:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.804 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:51.804 09:59:22 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.804 09:59:22 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:51.804 09:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.804 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:51.804 ************************************ 00:06:51.804 START TEST accel_compress_verify 00:06:51.804 ************************************ 00:06:51.804 09:59:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.804 09:59:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:51.804 09:59:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.804 09:59:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:51.805 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.805 09:59:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:51.805 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:51.805 09:59:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.805 09:59:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:51.805 09:59:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.805 09:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.805 09:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.805 09:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.805 09:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.805 09:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.805 09:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.805 09:59:22 -- accel/accel.sh@42 -- # jq -r . 00:06:51.805 [2024-07-14 09:59:22.311165] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:06:51.805 [2024-07-14 09:59:22.311303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70297 ] 00:06:52.064 [2024-07-14 09:59:22.447717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.064 [2024-07-14 09:59:22.519407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.064 [2024-07-14 09:59:22.591959] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:52.323 [2024-07-14 09:59:22.698807] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:52.323 00:06:52.323 Compression does not support the verify option, aborting. 00:06:52.323 09:59:22 -- common/autotest_common.sh@643 -- # es=161 00:06:52.323 09:59:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.323 09:59:22 -- common/autotest_common.sh@652 -- # es=33 00:06:52.323 09:59:22 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:52.323 09:59:22 -- common/autotest_common.sh@660 -- # es=1 00:06:52.323 09:59:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.323 00:06:52.323 real 0m0.503s 00:06:52.323 user 0m0.308s 00:06:52.323 sys 0m0.140s 00:06:52.324 09:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.324 ************************************ 00:06:52.324 END TEST accel_compress_verify 00:06:52.324 ************************************ 00:06:52.324 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.324 09:59:22 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:52.324 09:59:22 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:52.324 09:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.324 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.324 ************************************ 00:06:52.324 START TEST accel_wrong_workload 00:06:52.324 ************************************ 00:06:52.324 09:59:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:52.324 09:59:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.324 09:59:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:52.324 09:59:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:52.324 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.324 09:59:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:52.324 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.324 09:59:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:52.324 09:59:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:52.324 09:59:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.324 09:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.324 09:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.324 09:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.324 09:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.324 09:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.324 09:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.324 09:59:22 -- accel/accel.sh@42 -- # jq -r . 
00:06:52.583 Unsupported workload type: foobar 00:06:52.583 [2024-07-14 09:59:22.873755] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:52.583 accel_perf options: 00:06:52.583 [-h help message] 00:06:52.583 [-q queue depth per core] 00:06:52.583 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.583 [-T number of threads per core 00:06:52.583 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.583 [-t time in seconds] 00:06:52.583 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.583 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.583 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.583 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.583 [-S for crc32c workload, use this seed value (default 0) 00:06:52.583 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.583 [-f for fill workload, use this BYTE value (default 255) 00:06:52.583 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.583 [-y verify result if this switch is on] 00:06:52.583 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.583 Can be used to spread operations across a wider range of memory. 00:06:52.583 09:59:22 -- common/autotest_common.sh@643 -- # es=1 00:06:52.583 09:59:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.583 09:59:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.583 09:59:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.583 00:06:52.583 real 0m0.036s 00:06:52.583 user 0m0.018s 00:06:52.583 sys 0m0.017s 00:06:52.583 09:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.583 ************************************ 00:06:52.583 END TEST accel_wrong_workload 00:06:52.583 ************************************ 00:06:52.583 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.583 09:59:22 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.583 09:59:22 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:52.583 09:59:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.583 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.583 ************************************ 00:06:52.583 START TEST accel_negative_buffers 00:06:52.583 ************************************ 00:06:52.583 09:59:22 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:52.583 09:59:22 -- common/autotest_common.sh@640 -- # local es=0 00:06:52.583 09:59:22 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:52.583 09:59:22 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:52.583 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.583 09:59:22 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:52.583 09:59:22 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:52.583 09:59:22 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1 00:06:52.583 09:59:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:52.583 09:59:22 -- accel/accel.sh@12 -- # 
build_accel_config 00:06:52.583 09:59:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.583 09:59:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.583 09:59:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.583 09:59:22 -- accel/accel.sh@42 -- # jq -r . 00:06:52.583 -x option must be non-negative. 00:06:52.583 [2024-07-14 09:59:22.964090] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:52.583 accel_perf options: 00:06:52.583 [-h help message] 00:06:52.583 [-q queue depth per core] 00:06:52.583 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:52.583 [-T number of threads per core 00:06:52.583 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:52.583 [-t time in seconds] 00:06:52.583 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:52.583 [ dif_verify, , dif_generate, dif_generate_copy 00:06:52.583 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:52.583 [-l for compress/decompress workloads, name of uncompressed input file 00:06:52.583 [-S for crc32c workload, use this seed value (default 0) 00:06:52.583 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:52.583 [-f for fill workload, use this BYTE value (default 255) 00:06:52.583 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:52.583 [-y verify result if this switch is on] 00:06:52.583 [-a tasks to allocate per core (default: same value as -q)] 00:06:52.583 Can be used to spread operations across a wider range of memory. 
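The option dump, printed for both the rejected '-w foobar' and '-x -1' runs, doubles as a reference for driving accel_perf by hand. A few invocations composed only of flags that appear in it (the binary path is the one used throughout this job):

ACCEL=/home/vagrant/spdk_repo/spdk/build/examples/accel_perf
"$ACCEL" -t 1 -w crc32c -S 32 -y           # 1-second crc32c run, seed 32, verify on
"$ACCEL" -t 1 -w xor -y -x 2               # xor wants >= 2 source buffers, hence the -x -1 rejection above
"$ACCEL" -t 1 -w fill -f 128 -q 64 -a 64   # fill byte 128 (0x80), queue depth 64, 64 tasks per core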
00:06:52.583 09:59:22 -- common/autotest_common.sh@643 -- # es=1 00:06:52.583 09:59:22 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:52.583 09:59:22 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:52.583 09:59:22 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:52.583 00:06:52.583 real 0m0.034s 00:06:52.583 user 0m0.018s 00:06:52.583 sys 0m0.016s 00:06:52.583 09:59:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:52.583 ************************************ 00:06:52.583 END TEST accel_negative_buffers 00:06:52.583 ************************************ 00:06:52.583 09:59:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.583 09:59:23 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:52.583 09:59:23 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:52.583 09:59:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:52.583 09:59:23 -- common/autotest_common.sh@10 -- # set +x 00:06:52.583 ************************************ 00:06:52.583 START TEST accel_crc32c 00:06:52.583 ************************************ 00:06:52.583 09:59:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:52.583 09:59:23 -- accel/accel.sh@16 -- # local accel_opc 00:06:52.583 09:59:23 -- accel/accel.sh@17 -- # local accel_module 00:06:52.583 09:59:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:52.583 09:59:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:52.583 09:59:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.583 09:59:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.583 09:59:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.583 09:59:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.583 09:59:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.583 09:59:23 -- accel/accel.sh@42 -- # jq -r . 00:06:52.583 [2024-07-14 09:59:23.042143] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:52.583 [2024-07-14 09:59:23.042238] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70351 ] 00:06:52.850 [2024-07-14 09:59:23.177455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.850 [2024-07-14 09:59:23.274360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.224 09:59:24 -- accel/accel.sh@18 -- # out=' 00:06:54.224 SPDK Configuration: 00:06:54.224 Core mask: 0x1 00:06:54.224 00:06:54.224 Accel Perf Configuration: 00:06:54.224 Workload Type: crc32c 00:06:54.224 CRC-32C seed: 32 00:06:54.224 Transfer size: 4096 bytes 00:06:54.224 Vector count 1 00:06:54.224 Module: software 00:06:54.224 Queue depth: 32 00:06:54.224 Allocate depth: 32 00:06:54.224 # threads/core: 1 00:06:54.224 Run time: 1 seconds 00:06:54.224 Verify: Yes 00:06:54.224 00:06:54.224 Running for 1 seconds... 
00:06:54.224 00:06:54.224 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.224 ------------------------------------------------------------------------------------ 00:06:54.224 0,0 521664/s 2037 MiB/s 0 0 00:06:54.224 ==================================================================================== 00:06:54.224 Total 521664/s 2037 MiB/s 0 0' 00:06:54.224 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.224 09:59:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:54.224 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.224 09:59:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:54.224 09:59:24 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.224 09:59:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.224 09:59:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.224 09:59:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.224 09:59:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.224 09:59:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.224 09:59:24 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.224 09:59:24 -- accel/accel.sh@42 -- # jq -r . 00:06:54.224 [2024-07-14 09:59:24.596246] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:54.224 [2024-07-14 09:59:24.596375] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70376 ] 00:06:54.224 [2024-07-14 09:59:24.735489] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.482 [2024-07-14 09:59:24.845080] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=0x1 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=crc32c 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=32 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=software 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=32 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=32 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=1 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val=Yes 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:54.482 09:59:24 -- accel/accel.sh@21 -- # val= 00:06:54.482 09:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # IFS=: 00:06:54.482 09:59:24 -- accel/accel.sh@20 -- # read -r var val 00:06:55.853 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- 
accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@21 -- # val= 00:06:55.854 09:59:26 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # IFS=: 00:06:55.854 09:59:26 -- accel/accel.sh@20 -- # read -r var val 00:06:55.854 09:59:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:55.854 09:59:26 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:55.854 09:59:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:55.854 00:06:55.854 real 0m3.128s 00:06:55.854 user 0m2.613s 00:06:55.854 sys 0m0.306s 00:06:55.854 09:59:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.854 ************************************ 00:06:55.854 END TEST accel_crc32c 00:06:55.854 ************************************ 00:06:55.854 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 09:59:26 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:55.854 09:59:26 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:55.854 09:59:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:55.854 09:59:26 -- common/autotest_common.sh@10 -- # set +x 00:06:55.854 ************************************ 00:06:55.854 START TEST accel_crc32c_C2 00:06:55.854 ************************************ 00:06:55.854 09:59:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:55.854 09:59:26 -- accel/accel.sh@16 -- # local accel_opc 00:06:55.854 09:59:26 -- accel/accel.sh@17 -- # local accel_module 00:06:55.854 09:59:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:55.854 09:59:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:55.854 09:59:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.854 09:59:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.854 09:59:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.854 09:59:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.854 09:59:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.854 09:59:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.854 09:59:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.854 09:59:26 -- accel/accel.sh@42 -- # jq -r . 00:06:55.854 [2024-07-14 09:59:26.228226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:55.854 [2024-07-14 09:59:26.228341] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70411 ] 00:06:55.854 [2024-07-14 09:59:26.365173] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.112 [2024-07-14 09:59:26.491798] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.661 09:59:27 -- accel/accel.sh@18 -- # out=' 00:06:57.661 SPDK Configuration: 00:06:57.661 Core mask: 0x1 00:06:57.661 00:06:57.661 Accel Perf Configuration: 00:06:57.661 Workload Type: crc32c 00:06:57.661 CRC-32C seed: 0 00:06:57.661 Transfer size: 4096 bytes 00:06:57.661 Vector count 2 00:06:57.661 Module: software 00:06:57.661 Queue depth: 32 00:06:57.661 Allocate depth: 32 00:06:57.661 # threads/core: 1 00:06:57.661 Run time: 1 seconds 00:06:57.661 Verify: Yes 00:06:57.661 00:06:57.661 Running for 1 seconds... 
00:06:57.661 00:06:57.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.661 ------------------------------------------------------------------------------------ 00:06:57.661 0,0 394752/s 3084 MiB/s 0 0 00:06:57.661 ==================================================================================== 00:06:57.661 Total 394752/s 1542 MiB/s 0 0' 00:06:57.661 09:59:27 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:27 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:57.661 09:59:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:57.661 09:59:27 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.661 09:59:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.661 09:59:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.661 09:59:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.661 09:59:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.661 09:59:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.661 09:59:27 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.661 09:59:27 -- accel/accel.sh@42 -- # jq -r . 00:06:57.661 [2024-07-14 09:59:27.805377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:57.661 [2024-07-14 09:59:27.805491] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70430 ] 00:06:57.661 [2024-07-14 09:59:27.940836] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.661 [2024-07-14 09:59:28.047629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=0x1 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=crc32c 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=0 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=software 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=32 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=32 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=1 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val=Yes 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:57.661 09:59:28 -- accel/accel.sh@21 -- # val= 00:06:57.661 09:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # IFS=: 00:06:57.661 09:59:28 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- 
accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@21 -- # val= 00:06:59.032 09:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # IFS=: 00:06:59.032 09:59:29 -- accel/accel.sh@20 -- # read -r var val 00:06:59.032 09:59:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.032 09:59:29 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:59.032 09:59:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.032 00:06:59.032 real 0m3.134s 00:06:59.032 user 0m2.635s 00:06:59.032 sys 0m0.292s 00:06:59.032 09:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.032 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.032 ************************************ 00:06:59.032 END TEST accel_crc32c_C2 00:06:59.032 ************************************ 00:06:59.032 09:59:29 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.032 09:59:29 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:59.032 09:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.032 09:59:29 -- common/autotest_common.sh@10 -- # set +x 00:06:59.032 ************************************ 00:06:59.032 START TEST accel_copy 00:06:59.032 ************************************ 00:06:59.032 09:59:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:59.032 09:59:29 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.032 09:59:29 -- accel/accel.sh@17 -- # local accel_module 00:06:59.032 09:59:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:59.032 09:59:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.032 09:59:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.032 09:59:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.032 09:59:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.032 09:59:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.032 09:59:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.032 09:59:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.032 09:59:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.032 09:59:29 -- accel/accel.sh@42 -- # jq -r . 00:06:59.032 [2024-07-14 09:59:29.410525] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:06:59.032 [2024-07-14 09:59:29.410604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70465 ] 00:06:59.032 [2024-07-14 09:59:29.540961] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.289 [2024-07-14 09:59:29.638070] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.661 09:59:30 -- accel/accel.sh@18 -- # out=' 00:07:00.661 SPDK Configuration: 00:07:00.661 Core mask: 0x1 00:07:00.661 00:07:00.661 Accel Perf Configuration: 00:07:00.661 Workload Type: copy 00:07:00.661 Transfer size: 4096 bytes 00:07:00.661 Vector count 1 00:07:00.661 Module: software 00:07:00.661 Queue depth: 32 00:07:00.661 Allocate depth: 32 00:07:00.661 # threads/core: 1 00:07:00.661 Run time: 1 seconds 00:07:00.661 Verify: Yes 00:07:00.661 00:07:00.661 Running for 1 seconds... 
00:07:00.661 00:07:00.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.661 ------------------------------------------------------------------------------------ 00:07:00.661 0,0 349824/s 1366 MiB/s 0 0 00:07:00.661 ==================================================================================== 00:07:00.661 Total 349824/s 1366 MiB/s 0 0' 00:07:00.661 09:59:30 -- accel/accel.sh@20 -- # IFS=: 00:07:00.661 09:59:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:00.661 09:59:30 -- accel/accel.sh@20 -- # read -r var val 00:07:00.661 09:59:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:00.661 09:59:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.661 09:59:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.661 09:59:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.661 09:59:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.661 09:59:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.661 09:59:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.661 09:59:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.661 09:59:30 -- accel/accel.sh@42 -- # jq -r . 00:07:00.661 [2024-07-14 09:59:30.929436] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:00.661 [2024-07-14 09:59:30.929573] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70484 ] 00:07:00.661 [2024-07-14 09:59:31.075419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.661 [2024-07-14 09:59:31.134272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.661 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.661 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.661 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.661 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.661 09:59:31 -- accel/accel.sh@21 -- # val=0x1 00:07:00.661 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.661 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.661 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.661 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=copy 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- 
accel/accel.sh@21 -- # val= 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=software 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=32 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=32 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=1 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val=Yes 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.919 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.919 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.919 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:00.920 09:59:31 -- accel/accel.sh@21 -- # val= 00:07:00.920 09:59:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.920 09:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:00.920 09:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:01.852 09:59:32 -- accel/accel.sh@21 -- # val= 00:07:01.852 09:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.852 09:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:01.852 09:59:32 -- 
accel/accel.sh@20 -- # read -r var val 00:07:01.853 09:59:32 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.853 09:59:32 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:01.853 09:59:32 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.853 00:07:01.853 real 0m3.003s 00:07:01.853 user 0m2.513s 00:07:01.853 sys 0m0.286s 00:07:01.853 09:59:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.853 ************************************ 00:07:01.853 END TEST accel_copy 00:07:01.853 ************************************ 00:07:01.853 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.111 09:59:32 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.111 09:59:32 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:02.111 09:59:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.111 09:59:32 -- common/autotest_common.sh@10 -- # set +x 00:07:02.111 ************************************ 00:07:02.111 START TEST accel_fill 00:07:02.111 ************************************ 00:07:02.111 09:59:32 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.111 09:59:32 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.111 09:59:32 -- accel/accel.sh@17 -- # local accel_module 00:07:02.111 09:59:32 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.111 09:59:32 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.111 09:59:32 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:02.111 09:59:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.111 09:59:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.111 09:59:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.111 09:59:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.111 09:59:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.111 09:59:32 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.111 09:59:32 -- accel/accel.sh@42 -- # jq -r . 00:07:02.111 [2024-07-14 09:59:32.469348] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:02.111 [2024-07-14 09:59:32.469436] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70519 ] 00:07:02.111 [2024-07-14 09:59:32.608040] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.369 [2024-07-14 09:59:32.702746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.756 09:59:33 -- accel/accel.sh@18 -- # out=' 00:07:03.756 SPDK Configuration: 00:07:03.756 Core mask: 0x1 00:07:03.756 00:07:03.756 Accel Perf Configuration: 00:07:03.756 Workload Type: fill 00:07:03.756 Fill pattern: 0x80 00:07:03.756 Transfer size: 4096 bytes 00:07:03.756 Vector count 1 00:07:03.756 Module: software 00:07:03.756 Queue depth: 64 00:07:03.756 Allocate depth: 64 00:07:03.756 # threads/core: 1 00:07:03.756 Run time: 1 seconds 00:07:03.756 Verify: Yes 00:07:03.756 00:07:03.756 Running for 1 seconds... 
00:07:03.756 00:07:03.756 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.756 ------------------------------------------------------------------------------------ 00:07:03.756 0,0 507776/s 1983 MiB/s 0 0 00:07:03.756 ==================================================================================== 00:07:03.756 Total 507776/s 1983 MiB/s 0 0' 00:07:03.756 09:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:03.756 09:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:03.756 09:59:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.756 09:59:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:03.756 09:59:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.756 09:59:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.756 09:59:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.756 09:59:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.756 09:59:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.756 09:59:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.756 09:59:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.756 09:59:33 -- accel/accel.sh@42 -- # jq -r . 00:07:03.756 [2024-07-14 09:59:34.021931] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:03.756 [2024-07-14 09:59:34.022259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70538 ] 00:07:03.756 [2024-07-14 09:59:34.165044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.756 [2024-07-14 09:59:34.233373] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=0x1 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=fill 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=0x80 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 
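The Bandwidth column in these tables is just Transfers times the 4096-byte transfer size, converted to MiB/s, which makes the rows easy to sanity-check:

echo $(( 507776 * 4096 / 1048576 ))   # -> 1983, the fill row above
echo $(( 349824 * 4096 / 1048576 ))   # -> 1366, the earlier copy row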
00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=software 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@23 -- # accel_module=software 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=64 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.013 09:59:34 -- accel/accel.sh@21 -- # val=64 00:07:04.013 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.013 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.014 09:59:34 -- accel/accel.sh@21 -- # val=1 00:07:04.014 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.014 09:59:34 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:04.014 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.014 09:59:34 -- accel/accel.sh@21 -- # val=Yes 00:07:04.014 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.014 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.014 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:04.014 09:59:34 -- accel/accel.sh@21 -- # val= 00:07:04.014 09:59:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # IFS=: 00:07:04.014 09:59:34 -- accel/accel.sh@20 -- # read -r var val 00:07:05.386 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.386 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.386 09:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:05.386 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.386 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.387 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.387 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.387 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.387 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.387 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.387 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.387 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # IFS=: 
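A host-side illustration (not how accel_perf performs it internally) of the operation the trace above replays: fill a 4096-byte buffer with the pattern byte 0x80 from '-f 128':

head -c 4096 /dev/zero | tr '\0' '\200' > /tmp/fill_expected   # \200 octal = 0x80
od -A d -t x1 -N 16 /tmp/fill_expected                         # first 16 bytes, all "80"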
00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.387 09:59:35 -- accel/accel.sh@21 -- # val= 00:07:05.387 09:59:35 -- accel/accel.sh@22 -- # case "$var" in 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:05.387 09:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:05.387 09:59:35 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:05.387 09:59:35 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:05.387 09:59:35 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.387 00:07:05.387 real 0m3.061s 00:07:05.387 user 0m2.549s 00:07:05.387 sys 0m0.304s 00:07:05.387 ************************************ 00:07:05.387 END TEST accel_fill 00:07:05.387 ************************************ 00:07:05.387 09:59:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.387 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.387 09:59:35 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:05.387 09:59:35 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:05.387 09:59:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:05.387 09:59:35 -- common/autotest_common.sh@10 -- # set +x 00:07:05.387 ************************************ 00:07:05.387 START TEST accel_copy_crc32c 00:07:05.387 ************************************ 00:07:05.387 09:59:35 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:07:05.387 09:59:35 -- accel/accel.sh@16 -- # local accel_opc 00:07:05.387 09:59:35 -- accel/accel.sh@17 -- # local accel_module 00:07:05.387 09:59:35 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:05.387 09:59:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:05.387 09:59:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:05.387 09:59:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:05.387 09:59:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.387 09:59:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.387 09:59:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:05.387 09:59:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:05.387 09:59:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:05.387 09:59:35 -- accel/accel.sh@42 -- # jq -r . 00:07:05.387 [2024-07-14 09:59:35.581565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:05.387 [2024-07-14 09:59:35.581673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70573 ] 00:07:05.387 [2024-07-14 09:59:35.715942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.387 [2024-07-14 09:59:35.800193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.763 09:59:37 -- accel/accel.sh@18 -- # out=' 00:07:06.763 SPDK Configuration: 00:07:06.763 Core mask: 0x1 00:07:06.763 00:07:06.763 Accel Perf Configuration: 00:07:06.763 Workload Type: copy_crc32c 00:07:06.763 CRC-32C seed: 0 00:07:06.763 Vector size: 4096 bytes 00:07:06.763 Transfer size: 4096 bytes 00:07:06.763 Vector count 1 00:07:06.763 Module: software 00:07:06.763 Queue depth: 32 00:07:06.763 Allocate depth: 32 00:07:06.763 # threads/core: 1 00:07:06.763 Run time: 1 seconds 00:07:06.763 Verify: Yes 00:07:06.763 00:07:06.763 Running for 1 seconds... 
00:07:06.763 00:07:06.763 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.763 ------------------------------------------------------------------------------------ 00:07:06.763 0,0 275616/s 1076 MiB/s 0 0 00:07:06.763 ==================================================================================== 00:07:06.763 Total 275616/s 1076 MiB/s 0 0' 00:07:06.763 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:06.763 09:59:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:06.763 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:06.763 09:59:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.763 09:59:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:06.763 09:59:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.763 09:59:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.763 09:59:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.763 09:59:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.763 09:59:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.763 09:59:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.763 09:59:37 -- accel/accel.sh@42 -- # jq -r . 00:07:06.763 [2024-07-14 09:59:37.119167] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:06.763 [2024-07-14 09:59:37.119309] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70592 ] 00:07:06.763 [2024-07-14 09:59:37.250197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.021 [2024-07-14 09:59:37.355988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val=0x1 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val=0 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 
09:59:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val=software 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@23 -- # accel_module=software 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.021 09:59:37 -- accel/accel.sh@21 -- # val=32 00:07:07.021 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.021 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val=32 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val=1 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val=Yes 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:07.022 09:59:37 -- accel/accel.sh@21 -- # val= 00:07:07.022 09:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:07.022 09:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 
00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 ************************************ 00:07:08.397 END TEST accel_copy_crc32c 00:07:08.397 ************************************ 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@21 -- # val= 00:07:08.397 09:59:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # IFS=: 00:07:08.397 09:59:38 -- accel/accel.sh@20 -- # read -r var val 00:07:08.397 09:59:38 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:08.397 09:59:38 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:08.397 09:59:38 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.397 00:07:08.397 real 0m3.101s 00:07:08.397 user 0m2.597s 00:07:08.397 sys 0m0.298s 00:07:08.397 09:59:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.397 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.397 09:59:38 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:08.397 09:59:38 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:08.397 09:59:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:08.397 09:59:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.397 ************************************ 00:07:08.397 START TEST accel_copy_crc32c_C2 00:07:08.397 ************************************ 00:07:08.397 09:59:38 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:08.397 09:59:38 -- accel/accel.sh@16 -- # local accel_opc 00:07:08.397 09:59:38 -- accel/accel.sh@17 -- # local accel_module 00:07:08.398 09:59:38 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:08.398 09:59:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:08.398 09:59:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.398 09:59:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.398 09:59:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.398 09:59:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.398 09:59:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.398 09:59:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.398 09:59:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.398 09:59:38 -- accel/accel.sh@42 -- # jq -r . 00:07:08.398 [2024-07-14 09:59:38.737356] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
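copy_crc32c fuses the two preceding workloads: each operation copies the 4096-byte source and computes CRC-32C over it in one pass, which is why its 275616 ops/s trails plain copy's 349824 ops/s. The Bandwidth column counts the transfer size once per op, even though each op reads a source and writes a destination:

echo $(( 275616 * 4096 / 1048576 ))   # -> 1076, the MiB/s reported in the table above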
00:07:08.398 [2024-07-14 09:59:38.737459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70627 ] 00:07:08.398 [2024-07-14 09:59:38.867234] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.656 [2024-07-14 09:59:38.974709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.033 09:59:40 -- accel/accel.sh@18 -- # out=' 00:07:10.033 SPDK Configuration: 00:07:10.034 Core mask: 0x1 00:07:10.034 00:07:10.034 Accel Perf Configuration: 00:07:10.034 Workload Type: copy_crc32c 00:07:10.034 CRC-32C seed: 0 00:07:10.034 Vector size: 4096 bytes 00:07:10.034 Transfer size: 8192 bytes 00:07:10.034 Vector count 2 00:07:10.034 Module: software 00:07:10.034 Queue depth: 32 00:07:10.034 Allocate depth: 32 00:07:10.034 # threads/core: 1 00:07:10.034 Run time: 1 seconds 00:07:10.034 Verify: Yes 00:07:10.034 00:07:10.034 Running for 1 seconds... 00:07:10.034 00:07:10.034 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:10.034 ------------------------------------------------------------------------------------ 00:07:10.034 0,0 203744/s 1591 MiB/s 0 0 00:07:10.034 ==================================================================================== 00:07:10.034 Total 203744/s 795 MiB/s 0 0' 00:07:10.034 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.034 09:59:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:10.034 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.034 09:59:40 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:10.034 09:59:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.034 09:59:40 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.034 09:59:40 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.034 09:59:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.034 09:59:40 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.034 09:59:40 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.034 09:59:40 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.034 09:59:40 -- accel/accel.sh@42 -- # jq -r . 00:07:10.034 [2024-07-14 09:59:40.289850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
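In both -C 2 tables the two rows disagree by exactly 2x (3084 vs 1542 MiB/s for crc32c earlier, 1591 vs 795 MiB/s here). With a vector count of 2 each operation spans two 4096-byte buffers, and the arithmetic shows the per-core row counting both vectors while the Total row counts only one:

echo $(( 203744 * 8192 / 1048576 ))   # -> 1591  (per-core row: 2 x 4096 B per op)
echo $(( 203744 * 4096 / 1048576 ))   # -> 795   (Total row: one 4096 B vector per op)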
00:07:10.034 [2024-07-14 09:59:40.289945] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70646 ] 00:07:10.034 [2024-07-14 09:59:40.426705] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.034 [2024-07-14 09:59:40.539629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=0x1 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=0 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=software 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@23 -- # accel_module=software 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=32 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=32 
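The repeating case "$var" / IFS=: / read -r var val triplets above are bash xtrace output from the script that parses accel_perf's dump, splitting each line on the colon and dispatching on the key. A minimal sketch of that idiom (my reconstruction from the trace, not the script's actual source):

    out='Workload Type: copy_crc32c
    Module: software'
    while IFS=: read -r var val; do
      case "$var" in
        *'Workload Type'*) accel_opc=${val# } ;;    # traced as accel_opc=copy_crc32c
        *'Module'*)        accel_module=${val# } ;; # traced as accel_module=software
      esac
    done <<< "$out"
    echo "$accel_opc via $accel_module"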
00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.293 09:59:40 -- accel/accel.sh@21 -- # val=1 00:07:10.293 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.293 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.294 09:59:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:10.294 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.294 09:59:40 -- accel/accel.sh@21 -- # val=Yes 00:07:10.294 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.294 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.294 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:10.294 09:59:40 -- accel/accel.sh@21 -- # val= 00:07:10.294 09:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:10.294 09:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@21 -- # val= 00:07:11.672 09:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:11.672 09:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:11.672 09:59:41 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:11.672 09:59:41 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:11.672 09:59:41 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.672 00:07:11.672 real 0m3.121s 00:07:11.672 user 0m2.632s 00:07:11.672 sys 0m0.281s 00:07:11.672 09:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.672 ************************************ 00:07:11.672 END TEST accel_copy_crc32c_C2 00:07:11.672 ************************************ 00:07:11.672 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.672 09:59:41 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:11.672 09:59:41 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 
00:07:11.672 09:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:11.672 09:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.672 ************************************ 00:07:11.672 START TEST accel_dualcast 00:07:11.672 ************************************ 00:07:11.672 09:59:41 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:11.672 09:59:41 -- accel/accel.sh@16 -- # local accel_opc 00:07:11.672 09:59:41 -- accel/accel.sh@17 -- # local accel_module 00:07:11.672 09:59:41 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:11.672 09:59:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:11.672 09:59:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.672 09:59:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.672 09:59:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.672 09:59:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.672 09:59:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.672 09:59:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.672 09:59:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.672 09:59:41 -- accel/accel.sh@42 -- # jq -r . 00:07:11.672 [2024-07-14 09:59:41.911722] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:11.672 [2024-07-14 09:59:41.911809] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70683 ] 00:07:11.672 [2024-07-14 09:59:42.052210] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.672 [2024-07-14 09:59:42.142499] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.049 09:59:43 -- accel/accel.sh@18 -- # out=' 00:07:13.049 SPDK Configuration: 00:07:13.049 Core mask: 0x1 00:07:13.049 00:07:13.049 Accel Perf Configuration: 00:07:13.049 Workload Type: dualcast 00:07:13.049 Transfer size: 4096 bytes 00:07:13.049 Vector count 1 00:07:13.049 Module: software 00:07:13.049 Queue depth: 32 00:07:13.049 Allocate depth: 32 00:07:13.049 # threads/core: 1 00:07:13.049 Run time: 1 seconds 00:07:13.049 Verify: Yes 00:07:13.049 00:07:13.049 Running for 1 seconds... 00:07:13.049 00:07:13.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:13.049 ------------------------------------------------------------------------------------ 00:07:13.049 0,0 357408/s 1396 MiB/s 0 0 00:07:13.049 ==================================================================================== 00:07:13.049 Total 357408/s 1396 MiB/s 0 0' 00:07:13.049 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.049 09:59:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:13.049 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.049 09:59:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:13.049 09:59:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:13.049 09:59:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:13.049 09:59:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:13.049 09:59:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:13.049 09:59:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:13.049 09:59:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:13.049 09:59:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:13.049 09:59:43 -- accel/accel.sh@42 -- # jq -r . 
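dualcast, per its name and the single 4096-byte transfer size in the dump above, duplicates one source buffer into two destination buffers per operation. A loose shell analogy of the data movement (an analogy only, not the accel API):

    head -c 4096 /dev/urandom > src.bin
    tee dst1.bin < src.bin > dst2.bin   # one read, two identical writes
    cmp src.bin dst1.bin && cmp src.bin dst2.bin && echo verified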
00:07:13.049 [2024-07-14 09:59:43.452126] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:13.049 [2024-07-14 09:59:43.452242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70702 ] 00:07:13.049 [2024-07-14 09:59:43.590143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.307 [2024-07-14 09:59:43.716827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val=0x1 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val=dualcast 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.307 09:59:43 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.307 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.307 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val=software 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@23 -- # accel_module=software 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val=32 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val=32 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val=1 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 
09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val=Yes 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:13.308 09:59:43 -- accel/accel.sh@21 -- # val= 00:07:13.308 09:59:43 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # IFS=: 00:07:13.308 09:59:43 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 ************************************ 00:07:14.699 END TEST accel_dualcast 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@21 -- # val= 00:07:14.699 09:59:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # IFS=: 00:07:14.699 09:59:45 -- accel/accel.sh@20 -- # read -r var val 00:07:14.699 09:59:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:14.699 09:59:45 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:14.699 09:59:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.699 00:07:14.699 real 0m3.133s 00:07:14.699 user 0m2.621s 00:07:14.699 sys 0m0.303s 00:07:14.699 09:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.699 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.699 ************************************ 00:07:14.699 09:59:45 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:14.699 09:59:45 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:14.699 09:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.699 09:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:14.699 ************************************ 00:07:14.699 START TEST accel_compare 00:07:14.699 ************************************ 00:07:14.699 09:59:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:07:14.699 
09:59:45 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.699 09:59:45 -- accel/accel.sh@17 -- # local accel_module 00:07:14.700 09:59:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:14.700 09:59:45 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:14.700 09:59:45 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.700 09:59:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.700 09:59:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.700 09:59:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.700 09:59:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.700 09:59:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.700 09:59:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.700 09:59:45 -- accel/accel.sh@42 -- # jq -r . 00:07:14.700 [2024-07-14 09:59:45.093850] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:14.700 [2024-07-14 09:59:45.094109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70737 ] 00:07:14.700 [2024-07-14 09:59:45.229854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.958 [2024-07-14 09:59:45.358191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.335 09:59:46 -- accel/accel.sh@18 -- # out=' 00:07:16.335 SPDK Configuration: 00:07:16.335 Core mask: 0x1 00:07:16.335 00:07:16.335 Accel Perf Configuration: 00:07:16.335 Workload Type: compare 00:07:16.335 Transfer size: 4096 bytes 00:07:16.335 Vector count 1 00:07:16.336 Module: software 00:07:16.336 Queue depth: 32 00:07:16.336 Allocate depth: 32 00:07:16.336 # threads/core: 1 00:07:16.336 Run time: 1 seconds 00:07:16.336 Verify: Yes 00:07:16.336 00:07:16.336 Running for 1 seconds... 00:07:16.336 00:07:16.336 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.336 ------------------------------------------------------------------------------------ 00:07:16.336 0,0 458336/s 1790 MiB/s 0 0 00:07:16.336 ==================================================================================== 00:07:16.336 Total 458336/s 1790 MiB/s 0 0' 00:07:16.336 09:59:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:16.336 09:59:46 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.336 09:59:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.336 09:59:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.336 09:59:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.336 09:59:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.336 09:59:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.336 09:59:46 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.336 09:59:46 -- accel/accel.sh@42 -- # jq -r . 00:07:16.336 [2024-07-14 09:59:46.606764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
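compare is the memcmp-style workload: two 4096-byte buffers are read and checked for equality, which is why the Failed and Miscompares columns stay at 0. The table's 458336 transfers/s is consistent with its printed bandwidth (458336 * 4096 / 2^20 = 1790 MiB/s). A rough shell analogue of one operation:

    head -c 4096 /dev/urandom > a.bin
    cp a.bin b.bin
    cmp -s a.bin b.bin && echo 'no miscompares'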
00:07:16.336 [2024-07-14 09:59:46.606863] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70762 ] 00:07:16.336 [2024-07-14 09:59:46.744177] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.336 [2024-07-14 09:59:46.808934] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=0x1 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=compare 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=software 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=32 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=32 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=1 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val=Yes 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:16.336 09:59:46 -- accel/accel.sh@21 -- # val= 00:07:16.336 09:59:46 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # IFS=: 00:07:16.336 09:59:46 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@21 -- # val= 00:07:17.711 ************************************ 00:07:17.711 09:59:48 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # IFS=: 00:07:17.711 09:59:48 -- accel/accel.sh@20 -- # read -r var val 00:07:17.711 09:59:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.711 09:59:48 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:17.711 09:59:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.711 00:07:17.711 real 0m2.946s 00:07:17.711 user 0m2.488s 00:07:17.711 sys 0m0.251s 00:07:17.711 09:59:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.711 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:17.711 END TEST accel_compare 00:07:17.711 ************************************ 00:07:17.711 09:59:48 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:17.711 09:59:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:17.711 09:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.711 09:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:17.711 ************************************ 00:07:17.711 START TEST accel_xor 00:07:17.711 ************************************ 00:07:17.711 09:59:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:17.711 09:59:48 -- accel/accel.sh@16 -- # local accel_opc 00:07:17.711 09:59:48 -- accel/accel.sh@17 -- # local accel_module 00:07:17.711 
09:59:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:17.711 09:59:48 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:17.711 09:59:48 -- accel/accel.sh@12 -- # build_accel_config 00:07:17.711 09:59:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:17.711 09:59:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:17.711 09:59:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:17.711 09:59:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:17.711 09:59:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:17.711 09:59:48 -- accel/accel.sh@41 -- # local IFS=, 00:07:17.711 09:59:48 -- accel/accel.sh@42 -- # jq -r . 00:07:17.711 [2024-07-14 09:59:48.096534] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:17.711 [2024-07-14 09:59:48.096627] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70791 ] 00:07:17.711 [2024-07-14 09:59:48.235575] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.970 [2024-07-14 09:59:48.327445] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.344 09:59:49 -- accel/accel.sh@18 -- # out=' 00:07:19.344 SPDK Configuration: 00:07:19.344 Core mask: 0x1 00:07:19.344 00:07:19.344 Accel Perf Configuration: 00:07:19.344 Workload Type: xor 00:07:19.344 Source buffers: 2 00:07:19.344 Transfer size: 4096 bytes 00:07:19.344 Vector count 1 00:07:19.344 Module: software 00:07:19.344 Queue depth: 32 00:07:19.344 Allocate depth: 32 00:07:19.344 # threads/core: 1 00:07:19.345 Run time: 1 seconds 00:07:19.345 Verify: Yes 00:07:19.345 00:07:19.345 Running for 1 seconds... 00:07:19.345 00:07:19.345 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.345 ------------------------------------------------------------------------------------ 00:07:19.345 0,0 252832/s 987 MiB/s 0 0 00:07:19.345 ==================================================================================== 00:07:19.345 Total 252832/s 987 MiB/s 0 0' 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.345 09:59:49 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:19.345 09:59:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.345 09:59:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.345 09:59:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.345 09:59:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.345 09:59:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.345 09:59:49 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.345 09:59:49 -- accel/accel.sh@42 -- # jq -r . 00:07:19.345 [2024-07-14 09:59:49.563017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
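xor with the default two source buffers folds the sources together byte-wise; with -y the harness also verifies the output, which XOR keeps cheap since applying a source a second time undoes it. A two-byte illustration in shell arithmetic:

    printf '0x%02x\n' $(( 0xa5 ^ 0x5a ))            # 0xff
    printf '0x%02x\n' $(( (0xa5 ^ 0x5a) ^ 0x5a ))   # back to 0xa5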
00:07:19.345 [2024-07-14 09:59:49.563103] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70805 ] 00:07:19.345 [2024-07-14 09:59:49.702529] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.345 [2024-07-14 09:59:49.789213] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=0x1 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=xor 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=2 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=software 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=32 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=32 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=1 00:07:19.345 09:59:49 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val=Yes 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:19.345 09:59:49 -- accel/accel.sh@21 -- # val= 00:07:19.345 09:59:49 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # IFS=: 00:07:19.345 09:59:49 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@21 -- # val= 00:07:20.721 09:59:50 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # IFS=: 00:07:20.721 09:59:50 -- accel/accel.sh@20 -- # read -r var val 00:07:20.721 09:59:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.721 09:59:50 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:20.721 09:59:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.721 00:07:20.721 real 0m2.912s 00:07:20.721 user 0m2.469s 00:07:20.721 sys 0m0.236s 00:07:20.721 09:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.721 ************************************ 00:07:20.721 END TEST accel_xor 00:07:20.721 ************************************ 00:07:20.721 09:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:20.721 09:59:51 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:20.721 09:59:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:20.721 09:59:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.721 09:59:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.721 ************************************ 00:07:20.721 START TEST accel_xor 00:07:20.721 ************************************ 00:07:20.721 
09:59:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:20.721 09:59:51 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.721 09:59:51 -- accel/accel.sh@17 -- # local accel_module 00:07:20.721 09:59:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:20.721 09:59:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:20.721 09:59:51 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.721 09:59:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.721 09:59:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.721 09:59:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.721 09:59:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.721 09:59:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.721 09:59:51 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.721 09:59:51 -- accel/accel.sh@42 -- # jq -r . 00:07:20.721 [2024-07-14 09:59:51.056162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:20.722 [2024-07-14 09:59:51.056279] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70845 ] 00:07:20.722 [2024-07-14 09:59:51.184878] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.722 [2024-07-14 09:59:51.257779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.095 09:59:52 -- accel/accel.sh@18 -- # out=' 00:07:22.095 SPDK Configuration: 00:07:22.095 Core mask: 0x1 00:07:22.095 00:07:22.095 Accel Perf Configuration: 00:07:22.095 Workload Type: xor 00:07:22.095 Source buffers: 3 00:07:22.095 Transfer size: 4096 bytes 00:07:22.095 Vector count 1 00:07:22.095 Module: software 00:07:22.095 Queue depth: 32 00:07:22.095 Allocate depth: 32 00:07:22.095 # threads/core: 1 00:07:22.095 Run time: 1 seconds 00:07:22.095 Verify: Yes 00:07:22.095 00:07:22.095 Running for 1 seconds... 00:07:22.095 00:07:22.095 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.095 ------------------------------------------------------------------------------------ 00:07:22.095 0,0 251872/s 983 MiB/s 0 0 00:07:22.095 ==================================================================================== 00:07:22.095 Total 251872/s 983 MiB/s 0 0' 00:07:22.095 09:59:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:22.095 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.095 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.095 09:59:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:22.095 09:59:52 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.095 09:59:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.095 09:59:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.095 09:59:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.095 09:59:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.095 09:59:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.095 09:59:52 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.095 09:59:52 -- accel/accel.sh@42 -- # jq -r . 00:07:22.095 [2024-07-14 09:59:52.479758] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
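The only change from the previous xor run is the extra -x 3 argument, which lines up with "Source buffers: 3" in the configuration dump. XOR is associative and commutative, so a third buffer folds in the same way:

    printf '0x%02x\n' $(( 0xa5 ^ 0x5a ^ 0x0f ))     # 0xf0
    printf '0x%02x\n' $(( 0x0f ^ (0x5a ^ 0xa5) ))   # same result in any order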
00:07:22.095 [2024-07-14 09:59:52.479874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70859 ] 00:07:22.095 [2024-07-14 09:59:52.615884] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.353 [2024-07-14 09:59:52.685441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=0x1 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=xor 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=3 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=software 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=32 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=32 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=1 00:07:22.353 09:59:52 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val=Yes 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:22.353 09:59:52 -- accel/accel.sh@21 -- # val= 00:07:22.353 09:59:52 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # IFS=: 00:07:22.353 09:59:52 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@21 -- # val= 00:07:23.757 09:59:53 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # IFS=: 00:07:23.757 09:59:53 -- accel/accel.sh@20 -- # read -r var val 00:07:23.757 09:59:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.757 09:59:53 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:23.757 09:59:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.757 00:07:23.757 real 0m2.861s 00:07:23.757 user 0m2.426s 00:07:23.757 sys 0m0.228s 00:07:23.757 09:59:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.757 ************************************ 00:07:23.757 END TEST accel_xor 00:07:23.757 ************************************ 00:07:23.757 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 09:59:53 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:23.757 09:59:53 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:23.757 09:59:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.757 09:59:53 -- common/autotest_common.sh@10 -- # set +x 00:07:23.757 ************************************ 00:07:23.757 START TEST accel_dif_verify 00:07:23.757 ************************************ 
00:07:23.757 09:59:53 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:23.757 09:59:53 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.757 09:59:53 -- accel/accel.sh@17 -- # local accel_module 00:07:23.757 09:59:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:23.757 09:59:53 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:23.757 09:59:53 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.757 09:59:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.757 09:59:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.757 09:59:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.757 09:59:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.757 09:59:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.757 09:59:53 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.757 09:59:53 -- accel/accel.sh@42 -- # jq -r . 00:07:23.757 [2024-07-14 09:59:53.970570] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:23.757 [2024-07-14 09:59:53.970681] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70899 ] 00:07:23.757 [2024-07-14 09:59:54.107576] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.757 [2024-07-14 09:59:54.177923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.128 09:59:55 -- accel/accel.sh@18 -- # out=' 00:07:25.128 SPDK Configuration: 00:07:25.128 Core mask: 0x1 00:07:25.128 00:07:25.128 Accel Perf Configuration: 00:07:25.128 Workload Type: dif_verify 00:07:25.128 Vector size: 4096 bytes 00:07:25.128 Transfer size: 4096 bytes 00:07:25.128 Block size: 512 bytes 00:07:25.128 Metadata size: 8 bytes 00:07:25.128 Vector count 1 00:07:25.128 Module: software 00:07:25.128 Queue depth: 32 00:07:25.128 Allocate depth: 32 00:07:25.128 # threads/core: 1 00:07:25.128 Run time: 1 seconds 00:07:25.128 Verify: No 00:07:25.128 00:07:25.128 Running for 1 seconds... 00:07:25.128 00:07:25.128 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.128 ------------------------------------------------------------------------------------ 00:07:25.128 0,0 113408/s 449 MiB/s 0 0 00:07:25.128 ==================================================================================== 00:07:25.128 Total 113408/s 443 MiB/s 0 0' 00:07:25.128 09:59:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:25.128 09:59:55 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.128 09:59:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.128 09:59:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.128 09:59:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.128 09:59:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.128 09:59:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.128 09:59:55 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.128 09:59:55 -- accel/accel.sh@42 -- # jq -r . 00:07:25.128 [2024-07-14 09:59:55.387979] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
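Per the dump above, dif_verify works on protected blocks: 512 bytes of data plus 8 bytes of DIF metadata per block, so each 4096-byte buffer covers eight blocks. (Verify: No refers to the harness's own output check; the DIF checking is the workload itself — that split is my reading of the dump.) Block math:

    echo $(( 4096 / 512 ))         # 8 protected blocks per buffer
    echo $(( (4096 / 512) * 8 ))   # 64 bytes of protection info per buffer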
00:07:25.128 [2024-07-14 09:59:55.388074] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70913 ] 00:07:25.128 [2024-07-14 09:59:55.520097] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.128 [2024-07-14 09:59:55.592521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=0x1 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=dif_verify 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=software 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 
-- # val=32 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=32 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=1 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val=No 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.128 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.128 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.128 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.129 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:25.129 09:59:55 -- accel/accel.sh@21 -- # val= 00:07:25.129 09:59:55 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.129 09:59:55 -- accel/accel.sh@20 -- # IFS=: 00:07:25.129 09:59:55 -- accel/accel.sh@20 -- # read -r var val 00:07:26.500 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 ************************************ 00:07:26.501 END TEST accel_dif_verify 00:07:26.501 ************************************ 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@21 -- # val= 00:07:26.501 09:59:56 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # IFS=: 00:07:26.501 09:59:56 -- accel/accel.sh@20 -- # read -r var val 00:07:26.501 09:59:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.501 09:59:56 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:26.501 09:59:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.501 00:07:26.501 real 0m2.851s 00:07:26.501 user 0m2.432s 00:07:26.501 sys 0m0.216s 00:07:26.501 09:59:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.501 
09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.501 09:59:56 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:26.501 09:59:56 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:26.501 09:59:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.501 09:59:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.501 ************************************ 00:07:26.501 START TEST accel_dif_generate 00:07:26.501 ************************************ 00:07:26.501 09:59:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:26.501 09:59:56 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.501 09:59:56 -- accel/accel.sh@17 -- # local accel_module 00:07:26.501 09:59:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:26.501 09:59:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:26.501 09:59:56 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.501 09:59:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.501 09:59:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.501 09:59:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.501 09:59:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.501 09:59:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.501 09:59:56 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.501 09:59:56 -- accel/accel.sh@42 -- # jq -r . 00:07:26.501 [2024-07-14 09:59:56.882273] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:26.501 [2024-07-14 09:59:56.882431] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70948 ] 00:07:26.501 [2024-07-14 09:59:57.024437] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.758 [2024-07-14 09:59:57.086197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.132 09:59:58 -- accel/accel.sh@18 -- # out=' 00:07:28.132 SPDK Configuration: 00:07:28.132 Core mask: 0x1 00:07:28.132 00:07:28.132 Accel Perf Configuration: 00:07:28.132 Workload Type: dif_generate 00:07:28.132 Vector size: 4096 bytes 00:07:28.132 Transfer size: 4096 bytes 00:07:28.132 Block size: 512 bytes 00:07:28.133 Metadata size: 8 bytes 00:07:28.133 Vector count 1 00:07:28.133 Module: software 00:07:28.133 Queue depth: 32 00:07:28.133 Allocate depth: 32 00:07:28.133 # threads/core: 1 00:07:28.133 Run time: 1 seconds 00:07:28.133 Verify: No 00:07:28.133 00:07:28.133 Running for 1 seconds... 
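The configuration banner above comes from the accel_perf example app that accel.sh drives for each case. As a rough standalone equivalent — assuming the JSON config the harness pipes in via -c /dev/fd/62 can be dropped, and that -q and -o map to the 'Queue depth' and 'Transfer size' fields in the banner — the run would look like:

    # 1-second software dif_generate run, queue depth 32, 4096-byte transfers (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate -q 32 -o 4096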
00:07:28.133 00:07:28.133 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.133 ------------------------------------------------------------------------------------ 00:07:28.133 0,0 137088/s 543 MiB/s 0 0 00:07:28.133 ==================================================================================== 00:07:28.133 Total 137088/s 535 MiB/s 0 0' 00:07:28.133 09:59:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:28.133 09:59:58 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.133 09:59:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.133 09:59:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.133 09:59:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.133 09:59:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.133 09:59:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:28.133 09:59:58 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.133 09:59:58 -- accel/accel.sh@42 -- # jq -r . 00:07:28.133 [2024-07-14 09:59:58.308848] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:28.133 [2024-07-14 09:59:58.308961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70967 ] 00:07:28.133 [2024-07-14 09:59:58.439285] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.133 [2024-07-14 09:59:58.512295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=0x1 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=dif_generate 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 
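The long runs of 'case "$var" in' / 'IFS=:' / 'read -r var val' entries throughout this log are bash xtrace (set -x) output of accel.sh consuming accel_perf's banner line by line; each 'val=...' entry is one field being read, and the values feed the accel_module/accel_opc checks seen at the end of each test ([[ -n software ]], [[ -n dif_generate ]]). Judging from the traced line numbers (@20-@24), the parser is roughly the loop below; the match patterns here are illustrative guesses, the real ones live in test/accel/accel.sh:

    while IFS=: read -r var val; do
      case "$var" in
        *"Workload Type"*) accel_opc=${val//[[:space:]]/} ;;  # e.g. dif_generate
        *Module*) accel_module=${val//[[:space:]]/} ;;        # e.g. software
      esac
    done < <("$accel_perf_cmd")   # $accel_perf_cmd is a placeholder for the traced command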
00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=software 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@23 -- # accel_module=software 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=32 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=32 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=1 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val=No 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:28.133 09:59:58 -- accel/accel.sh@21 -- # val= 00:07:28.133 09:59:58 -- accel/accel.sh@22 -- # case "$var" in 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # IFS=: 00:07:28.133 09:59:58 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- 
accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@21 -- # val= 00:07:29.506 09:59:59 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # IFS=: 00:07:29.506 09:59:59 -- accel/accel.sh@20 -- # read -r var val 00:07:29.506 09:59:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.506 09:59:59 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:29.506 09:59:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.506 00:07:29.506 real 0m2.864s 00:07:29.506 user 0m2.423s 00:07:29.506 sys 0m0.239s 00:07:29.506 09:59:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.506 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.506 ************************************ 00:07:29.506 END TEST accel_dif_generate 00:07:29.506 ************************************ 00:07:29.506 09:59:59 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:29.506 09:59:59 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:29.506 09:59:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.506 09:59:59 -- common/autotest_common.sh@10 -- # set +x 00:07:29.506 ************************************ 00:07:29.506 START TEST accel_dif_generate_copy 00:07:29.506 ************************************ 00:07:29.506 09:59:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:29.506 09:59:59 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.506 09:59:59 -- accel/accel.sh@17 -- # local accel_module 00:07:29.506 09:59:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:29.506 09:59:59 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.506 09:59:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:29.506 09:59:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.506 09:59:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.506 09:59:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.506 09:59:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.506 09:59:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.506 09:59:59 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.506 09:59:59 -- accel/accel.sh@42 -- # jq -r . 00:07:29.506 [2024-07-14 09:59:59.795009] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:29.506 [2024-07-14 09:59:59.795083] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71001 ] 00:07:29.506 [2024-07-14 09:59:59.920625] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.506 [2024-07-14 09:59:59.976165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.886 10:00:01 -- accel/accel.sh@18 -- # out=' 00:07:30.886 SPDK Configuration: 00:07:30.886 Core mask: 0x1 00:07:30.886 00:07:30.886 Accel Perf Configuration: 00:07:30.886 Workload Type: dif_generate_copy 00:07:30.886 Vector size: 4096 bytes 00:07:30.886 Transfer size: 4096 bytes 00:07:30.886 Vector count 1 00:07:30.886 Module: software 00:07:30.886 Queue depth: 32 00:07:30.886 Allocate depth: 32 00:07:30.886 # threads/core: 1 00:07:30.886 Run time: 1 seconds 00:07:30.886 Verify: No 00:07:30.886 00:07:30.886 Running for 1 seconds... 00:07:30.886 00:07:30.886 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.886 ------------------------------------------------------------------------------------ 00:07:30.886 0,0 100064/s 396 MiB/s 0 0 00:07:30.886 ==================================================================================== 00:07:30.886 Total 100064/s 390 MiB/s 0 0' 00:07:30.886 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:30.886 10:00:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:30.886 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:30.886 10:00:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.886 10:00:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:30.886 10:00:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.886 10:00:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.886 10:00:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.886 10:00:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.886 10:00:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.886 10:00:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.886 10:00:01 -- accel/accel.sh@42 -- # jq -r . 00:07:30.886 [2024-07-14 10:00:01.216317] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
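Note that the dif_generate_copy banner above lists only 'Vector size' and 'Transfer size' — no 'Block size' or 'Metadata size' fields as in the plain dif_generate case — and throughput drops from 137088/s to 100064/s, presumably because each operation also copies the 4096-byte payload. A hedged standalone sketch of the same run:

    # dif_generate_copy under the same assumed -q/-o mapping as above (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy -q 32 -o 4096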
00:07:30.886 [2024-07-14 10:00:01.216424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71021 ] 00:07:30.886 [2024-07-14 10:00:01.351901] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.886 [2024-07-14 10:00:01.423334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.142 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.142 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.142 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.142 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.142 10:00:01 -- accel/accel.sh@21 -- # val=0x1 00:07:31.142 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.142 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.142 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.142 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val=software 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@23 -- # accel_module=software 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val=32 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val=32 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 
-- # val=1 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val=No 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:31.143 10:00:01 -- accel/accel.sh@21 -- # val= 00:07:31.143 10:00:01 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # IFS=: 00:07:31.143 10:00:01 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@21 -- # val= 00:07:32.510 10:00:02 -- accel/accel.sh@22 -- # case "$var" in 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # IFS=: 00:07:32.510 10:00:02 -- accel/accel.sh@20 -- # read -r var val 00:07:32.510 10:00:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:32.510 10:00:02 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:32.510 10:00:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:32.510 00:07:32.510 real 0m2.856s 00:07:32.510 user 0m2.425s 00:07:32.510 sys 0m0.226s 00:07:32.510 10:00:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:32.510 10:00:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.510 ************************************ 00:07:32.510 END TEST accel_dif_generate_copy 00:07:32.510 ************************************ 00:07:32.510 10:00:02 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:32.510 10:00:02 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.510 10:00:02 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:32.510 10:00:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:32.510 10:00:02 -- 
common/autotest_common.sh@10 -- # set +x 00:07:32.510 ************************************ 00:07:32.510 START TEST accel_comp 00:07:32.510 ************************************ 00:07:32.510 10:00:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.510 10:00:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.510 10:00:02 -- accel/accel.sh@17 -- # local accel_module 00:07:32.510 10:00:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.510 10:00:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:32.511 10:00:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.511 10:00:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.511 10:00:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.511 10:00:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.511 10:00:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.511 10:00:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.511 10:00:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.511 10:00:02 -- accel/accel.sh@42 -- # jq -r . 00:07:32.511 [2024-07-14 10:00:02.706853] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:32.511 [2024-07-14 10:00:02.706942] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71050 ] 00:07:32.511 [2024-07-14 10:00:02.837473] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.511 [2024-07-14 10:00:02.903673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.878 10:00:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.878 00:07:33.878 SPDK Configuration: 00:07:33.878 Core mask: 0x1 00:07:33.878 00:07:33.878 Accel Perf Configuration: 00:07:33.878 Workload Type: compress 00:07:33.878 Transfer size: 4096 bytes 00:07:33.878 Vector count 1 00:07:33.878 Module: software 00:07:33.878 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.878 Queue depth: 32 00:07:33.878 Allocate depth: 32 00:07:33.878 # threads/core: 1 00:07:33.878 Run time: 1 seconds 00:07:33.878 Verify: No 00:07:33.878 00:07:33.878 Running for 1 seconds... 
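The compress and decompress cases differ from the DIF ones in taking an input file via -l — the 'File Name' field in the banner above points at the bib test corpus — and in that only the decompress runs pass -y, flipping the banner's 'Verify' line from No to Yes. A standalone sketch of this compress run, with -w and -l taken verbatim from the traced command line and -q assumed from the 'Queue depth: 32' field:

    # 1-second software compress run over the bib corpus (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress -q 32 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib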
00:07:33.878 00:07:33.878 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.878 ------------------------------------------------------------------------------------ 00:07:33.878 0,0 53856/s 224 MiB/s 0 0 00:07:33.878 ==================================================================================== 00:07:33.878 Total 53856/s 210 MiB/s 0 0' 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.878 10:00:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.878 10:00:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.878 10:00:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.878 10:00:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.878 10:00:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.878 10:00:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.878 10:00:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.878 10:00:04 -- accel/accel.sh@42 -- # jq -r . 00:07:33.878 [2024-07-14 10:00:04.131154] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:33.878 [2024-07-14 10:00:04.131297] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71070 ] 00:07:33.878 [2024-07-14 10:00:04.269366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.878 [2024-07-14 10:00:04.345805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val=0x1 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.878 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.878 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.878 10:00:04 -- accel/accel.sh@21 -- # val=compress 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 
00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=software 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=32 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=32 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=1 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val=No 00:07:33.879 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.879 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.879 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:34.136 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.136 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:34.136 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:34.136 10:00:04 -- accel/accel.sh@21 -- # val= 00:07:34.136 10:00:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.136 10:00:04 -- accel/accel.sh@20 -- # IFS=: 00:07:34.136 10:00:04 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 
00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.068 10:00:05 -- accel/accel.sh@21 -- # val= 00:07:35.068 10:00:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # IFS=: 00:07:35.068 10:00:05 -- accel/accel.sh@20 -- # read -r var val 00:07:35.069 10:00:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:35.069 10:00:05 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:35.069 10:00:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:35.069 00:07:35.069 real 0m2.872s 00:07:35.069 user 0m2.443s 00:07:35.069 sys 0m0.222s 00:07:35.069 10:00:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.069 10:00:05 -- common/autotest_common.sh@10 -- # set +x 00:07:35.069 ************************************ 00:07:35.069 END TEST accel_comp 00:07:35.069 ************************************ 00:07:35.069 10:00:05 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.069 10:00:05 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:35.069 10:00:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:35.069 10:00:05 -- common/autotest_common.sh@10 -- # set +x 00:07:35.069 ************************************ 00:07:35.069 START TEST accel_decomp 00:07:35.069 ************************************ 00:07:35.069 10:00:05 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.069 10:00:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:35.069 10:00:05 -- accel/accel.sh@17 -- # local accel_module 00:07:35.069 10:00:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.069 10:00:05 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:35.069 10:00:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.069 10:00:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.069 10:00:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.069 10:00:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.069 10:00:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.069 10:00:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.069 10:00:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.069 10:00:05 -- accel/accel.sh@42 -- # jq -r . 00:07:35.326 [2024-07-14 10:00:05.627967] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:35.326 [2024-07-14 10:00:05.628055] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71104 ] 00:07:35.326 [2024-07-14 10:00:05.767423] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.326 [2024-07-14 10:00:05.833542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.697 10:00:07 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:36.697 00:07:36.697 SPDK Configuration: 00:07:36.697 Core mask: 0x1 00:07:36.697 00:07:36.697 Accel Perf Configuration: 00:07:36.697 Workload Type: decompress 00:07:36.697 Transfer size: 4096 bytes 00:07:36.697 Vector count 1 00:07:36.697 Module: software 00:07:36.697 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.697 Queue depth: 32 00:07:36.697 Allocate depth: 32 00:07:36.697 # threads/core: 1 00:07:36.697 Run time: 1 seconds 00:07:36.697 Verify: Yes 00:07:36.697 00:07:36.697 Running for 1 seconds... 00:07:36.697 00:07:36.697 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.697 ------------------------------------------------------------------------------------ 00:07:36.697 0,0 73984/s 136 MiB/s 0 0 00:07:36.697 ==================================================================================== 00:07:36.697 Total 73984/s 289 MiB/s 0 0' 00:07:36.697 10:00:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.697 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.697 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.697 10:00:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:36.697 10:00:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.697 10:00:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.697 10:00:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.697 10:00:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.697 10:00:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.697 10:00:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.697 10:00:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.697 10:00:07 -- accel/accel.sh@42 -- # jq -r . 00:07:36.697 [2024-07-14 10:00:07.053499] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:36.697 [2024-07-14 10:00:07.053587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71124 ] 00:07:36.697 [2024-07-14 10:00:07.185303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.955 [2024-07-14 10:00:07.251278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=0x1 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=decompress 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=software 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=32 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- 
accel/accel.sh@21 -- # val=32 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=1 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val=Yes 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:36.955 10:00:07 -- accel/accel.sh@21 -- # val= 00:07:36.955 10:00:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # IFS=: 00:07:36.955 10:00:07 -- accel/accel.sh@20 -- # read -r var val 00:07:38.328 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.328 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.328 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.328 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.328 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.328 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.328 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.328 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.329 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.329 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.329 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.329 10:00:08 -- accel/accel.sh@21 -- # val= 00:07:38.329 10:00:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # IFS=: 00:07:38.329 10:00:08 -- accel/accel.sh@20 -- # read -r var val 00:07:38.329 10:00:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:38.329 10:00:08 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:38.329 10:00:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:38.329 ************************************ 00:07:38.329 END TEST accel_decomp 00:07:38.329 ************************************ 00:07:38.329 00:07:38.329 real 0m2.842s 00:07:38.329 user 0m2.416s 00:07:38.329 sys 0m0.221s 00:07:38.329 10:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.329 10:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:38.329 10:00:08 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:38.329 10:00:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:38.329 10:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.329 10:00:08 -- common/autotest_common.sh@10 -- # set +x 00:07:38.329 ************************************ 00:07:38.329 START TEST accel_decmop_full 00:07:38.329 ************************************ 00:07:38.329 10:00:08 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.329 10:00:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.329 10:00:08 -- accel/accel.sh@17 -- # local accel_module 00:07:38.329 10:00:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.329 10:00:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:38.329 10:00:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.329 10:00:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.329 10:00:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.329 10:00:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.329 10:00:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.329 10:00:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.329 10:00:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.329 10:00:08 -- accel/accel.sh@42 -- # jq -r . 00:07:38.329 [2024-07-14 10:00:08.518622] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:38.329 [2024-07-14 10:00:08.518701] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71157 ] 00:07:38.329 [2024-07-14 10:00:08.651985] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.329 [2024-07-14 10:00:08.721045] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.713 10:00:09 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:39.713 00:07:39.713 SPDK Configuration: 00:07:39.713 Core mask: 0x1 00:07:39.713 00:07:39.713 Accel Perf Configuration: 00:07:39.713 Workload Type: decompress 00:07:39.713 Transfer size: 111250 bytes 00:07:39.713 Vector count 1 00:07:39.713 Module: software 00:07:39.713 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.713 Queue depth: 32 00:07:39.713 Allocate depth: 32 00:07:39.713 # threads/core: 1 00:07:39.713 Run time: 1 seconds 00:07:39.713 Verify: Yes 00:07:39.713 00:07:39.713 Running for 1 seconds... 
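The accel_decmop_full case (the 'decmop' spelling is the harness's own test name) repeats the verified decompress workload with -o 0 appended; judging from the banner above, that makes accel_perf size each transfer from the compressed input itself (111250 bytes) rather than the default 4096. A sketch under that reading, flags otherwise verbatim from the traced command:

    # full-buffer decompress with verification; -o 0 semantics assumed from the banner (sketch)
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress -y -o 0 \
        -l /home/vagrant/spdk_repo/spdk/test/accel/bib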
00:07:39.713 00:07:39.713 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:39.713 ------------------------------------------------------------------------------------ 00:07:39.713 0,0 5056/s 208 MiB/s 0 0 00:07:39.713 ==================================================================================== 00:07:39.713 Total 5056/s 536 MiB/s 0 0' 00:07:39.713 10:00:09 -- accel/accel.sh@20 -- # IFS=: 00:07:39.713 10:00:09 -- accel/accel.sh@20 -- # read -r var val 00:07:39.713 10:00:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.713 10:00:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:39.713 10:00:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:39.713 10:00:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.713 10:00:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.713 10:00:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.713 10:00:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.713 10:00:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.713 10:00:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.713 10:00:09 -- accel/accel.sh@42 -- # jq -r . 00:07:39.713 [2024-07-14 10:00:09.960650] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:39.713 [2024-07-14 10:00:09.960744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71172 ] 00:07:39.713 [2024-07-14 10:00:10.100723] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.713 [2024-07-14 10:00:10.188958] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.713 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.713 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.713 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.713 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.713 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.713 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.713 10:00:10 -- accel/accel.sh@21 -- # val=0x1 00:07:39.713 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.713 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.713 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.713 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.980 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.980 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.980 10:00:10 -- accel/accel.sh@21 -- # val=decompress 00:07:39.980 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.980 10:00:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:39.980 10:00:10 -- accel/accel.sh@20 
-- # IFS=: 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.980 10:00:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:39.980 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.980 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.980 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.980 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=software 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=32 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=32 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=1 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val=Yes 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:39.981 10:00:10 -- accel/accel.sh@21 -- # val= 00:07:39.981 10:00:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # IFS=: 00:07:39.981 10:00:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.913 10:00:11 -- accel/accel.sh@21 -- # val= 00:07:40.913 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.913 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.913 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.913 10:00:11 -- accel/accel.sh@21 -- # val= 00:07:40.913 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.914 10:00:11 -- accel/accel.sh@21 -- # val= 00:07:40.914 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.914 10:00:11 -- accel/accel.sh@21 -- # 
val= 00:07:40.914 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.914 10:00:11 -- accel/accel.sh@21 -- # val= 00:07:40.914 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.914 10:00:11 -- accel/accel.sh@21 -- # val= 00:07:40.914 10:00:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # IFS=: 00:07:40.914 10:00:11 -- accel/accel.sh@20 -- # read -r var val 00:07:40.914 10:00:11 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.914 10:00:11 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:40.914 10:00:11 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.914 00:07:40.914 real 0m2.918s 00:07:40.914 user 0m2.472s 00:07:40.914 sys 0m0.237s 00:07:40.914 10:00:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:40.914 10:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:40.914 ************************************ 00:07:40.914 END TEST accel_decmop_full 00:07:40.914 ************************************ 00:07:40.914 10:00:11 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:40.914 10:00:11 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:40.915 10:00:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.915 10:00:11 -- common/autotest_common.sh@10 -- # set +x 00:07:41.176 ************************************ 00:07:41.176 START TEST accel_decomp_mcore 00:07:41.176 ************************************ 00:07:41.176 10:00:11 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.176 10:00:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:41.176 10:00:11 -- accel/accel.sh@17 -- # local accel_module 00:07:41.176 10:00:11 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.176 10:00:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:41.176 10:00:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.176 10:00:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.176 10:00:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.176 10:00:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.176 10:00:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.176 10:00:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.176 10:00:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.176 10:00:11 -- accel/accel.sh@42 -- # jq -r . 00:07:41.176 [2024-07-14 10:00:11.491369] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:07:41.176 [2024-07-14 10:00:11.491472] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71207 ] 00:07:41.176 [2024-07-14 10:00:11.634580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:41.449 [2024-07-14 10:00:11.728259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:41.449 [2024-07-14 10:00:11.728769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:41.449 [2024-07-14 10:00:11.729022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:41.449 [2024-07-14 10:00:11.729255] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.826 10:00:12 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:42.826 00:07:42.826 SPDK Configuration: 00:07:42.826 Core mask: 0xf 00:07:42.826 00:07:42.826 Accel Perf Configuration: 00:07:42.826 Workload Type: decompress 00:07:42.826 Transfer size: 4096 bytes 00:07:42.826 Vector count 1 00:07:42.826 Module: software 00:07:42.826 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.826 Queue depth: 32 00:07:42.826 Allocate depth: 32 00:07:42.826 # threads/core: 1 00:07:42.826 Run time: 1 seconds 00:07:42.826 Verify: Yes 00:07:42.826 00:07:42.826 Running for 1 seconds... 00:07:42.826 00:07:42.826 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:42.826 ------------------------------------------------------------------------------------ 00:07:42.826 0,0 56832/s 104 MiB/s 0 0 00:07:42.826 3,0 57248/s 105 MiB/s 0 0 00:07:42.826 2,0 56928/s 104 MiB/s 0 0 00:07:42.826 1,0 55616/s 102 MiB/s 0 0 00:07:42.826 ==================================================================================== 00:07:42.826 Total 226624/s 885 MiB/s 0 0' 00:07:42.826 10:00:12 -- accel/accel.sh@20 -- # IFS=: 00:07:42.826 10:00:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:42.826 10:00:12 -- accel/accel.sh@20 -- # read -r var val 00:07:42.826 10:00:12 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:42.827 10:00:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.827 10:00:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.827 10:00:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.827 10:00:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.827 10:00:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.827 10:00:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.827 10:00:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.827 10:00:12 -- accel/accel.sh@42 -- # jq -r . 00:07:42.827 [2024-07-14 10:00:12.971824] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
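For reference: the -m 0xf mask passed to accel_perf above selects CPU cores 0-3, matching the four reactor startup notices and the four Core,Thread rows in the results table. A mask can be expanded into core numbers with plain shell arithmetic, e.g.:

    mask=0xf
    for ((core = 0; core < 64; core++)); do
        (( (mask >> core) & 1 )) && echo "core $core selected"
    done
    # 0xf -> cores 0, 1, 2, 3; 0x7 (used by the DIF tests later) -> cores 0, 1, 2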
00:07:42.827 [2024-07-14 10:00:12.971902] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71229 ] 00:07:42.827 [2024-07-14 10:00:13.106282] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:42.827 [2024-07-14 10:00:13.190238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.827 [2024-07-14 10:00:13.190363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.827 [2024-07-14 10:00:13.190487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:42.827 [2024-07-14 10:00:13.190488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.827 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.827 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.827 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.827 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.827 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.827 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.827 10:00:13 -- accel/accel.sh@21 -- # val=0xf 00:07:42.827 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.827 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=decompress 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=software 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@23 -- # accel_module=software 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 
00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=32 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=32 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=1 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val=Yes 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.828 10:00:13 -- accel/accel.sh@21 -- # val= 00:07:42.828 10:00:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.828 10:00:13 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- 
accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@21 -- # val= 00:07:44.203 10:00:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.203 10:00:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.203 10:00:14 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:44.203 10:00:14 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:44.203 10:00:14 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:44.203 00:07:44.203 real 0m2.961s 00:07:44.203 user 0m9.303s 00:07:44.203 sys 0m0.278s 00:07:44.203 10:00:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.203 ************************************ 00:07:44.203 10:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:44.203 END TEST accel_decomp_mcore 00:07:44.203 ************************************ 00:07:44.203 10:00:14 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.203 10:00:14 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:44.203 10:00:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.203 10:00:14 -- common/autotest_common.sh@10 -- # set +x 00:07:44.203 ************************************ 00:07:44.203 START TEST accel_decomp_full_mcore 00:07:44.203 ************************************ 00:07:44.203 10:00:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.203 10:00:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:44.203 10:00:14 -- accel/accel.sh@17 -- # local accel_module 00:07:44.203 10:00:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.203 10:00:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:44.203 10:00:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.203 10:00:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.203 10:00:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.203 10:00:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.203 10:00:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.203 10:00:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.203 10:00:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.203 10:00:14 -- accel/accel.sh@42 -- # jq -r . 00:07:44.203 [2024-07-14 10:00:14.508646] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:44.203 [2024-07-14 10:00:14.508734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71267 ] 00:07:44.203 [2024-07-14 10:00:14.650435] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:44.203 [2024-07-14 10:00:14.732438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.203 [2024-07-14 10:00:14.732578] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.203 [2024-07-14 10:00:14.732740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:44.203 [2024-07-14 10:00:14.732745] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.570 10:00:15 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:45.570 00:07:45.570 SPDK Configuration: 00:07:45.570 Core mask: 0xf 00:07:45.570 00:07:45.570 Accel Perf Configuration: 00:07:45.570 Workload Type: decompress 00:07:45.570 Transfer size: 111250 bytes 00:07:45.570 Vector count 1 00:07:45.570 Module: software 00:07:45.570 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.570 Queue depth: 32 00:07:45.570 Allocate depth: 32 00:07:45.570 # threads/core: 1 00:07:45.570 Run time: 1 seconds 00:07:45.570 Verify: Yes 00:07:45.570 00:07:45.570 Running for 1 seconds... 00:07:45.570 00:07:45.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:45.570 ------------------------------------------------------------------------------------ 00:07:45.570 0,0 4320/s 178 MiB/s 0 0 00:07:45.570 3,0 4224/s 174 MiB/s 0 0 00:07:45.570 2,0 4256/s 175 MiB/s 0 0 00:07:45.570 1,0 4256/s 175 MiB/s 0 0 00:07:45.570 ==================================================================================== 00:07:45.570 Total 17056/s 1809 MiB/s 0 0' 00:07:45.570 10:00:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.570 10:00:15 -- accel/accel.sh@20 -- # IFS=: 00:07:45.570 10:00:15 -- accel/accel.sh@20 -- # read -r var val 00:07:45.570 10:00:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:45.570 10:00:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.570 10:00:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.570 10:00:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.570 10:00:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.570 10:00:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.570 10:00:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.570 10:00:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.570 10:00:15 -- accel/accel.sh@42 -- # jq -r . 00:07:45.570 [2024-07-14 10:00:15.975403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
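For reference: the long runs of case "$var" in, IFS=:, and read -r var val xtrace lines throughout this section are the harness parsing the "Key: value" configuration echo that accel_perf prints before each run. A sketch of that loop (variable handling here is illustrative, not copied verbatim from accel.sh):

    while IFS=: read -r var val; do
        case "$var" in
            *Module*)          accel_module=${val//[[:space:]]/} ;;  # e.g. software
            *'Workload Type'*) accel_opc=${val//[[:space:]]/}    ;;  # e.g. decompress
        esac
    done <<< "$out"   # $out holds the captured accel_perf output, as in the out=... trace above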
00:07:45.570 [2024-07-14 10:00:15.975489] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71289 ] 00:07:45.570 [2024-07-14 10:00:16.108388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:45.827 [2024-07-14 10:00:16.171354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:45.827 [2024-07-14 10:00:16.171500] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:45.827 [2024-07-14 10:00:16.171612] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:45.827 [2024-07-14 10:00:16.171614] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=0xf 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=decompress 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=software 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@23 -- # accel_module=software 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 
00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=32 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=32 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=1 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val=Yes 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.827 10:00:16 -- accel/accel.sh@21 -- # val= 00:07:45.827 10:00:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.827 10:00:16 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- 
accel/accel.sh@20 -- # read -r var val 00:07:47.196 10:00:17 -- accel/accel.sh@21 -- # val= 00:07:47.196 10:00:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # IFS=: 00:07:47.196 10:00:17 -- accel/accel.sh@20 -- # read -r var val 00:07:47.196 ************************************ 00:07:47.196 END TEST accel_decomp_full_mcore 00:07:47.196 ************************************ 00:07:47.196 10:00:17 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:47.196 10:00:17 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:47.196 10:00:17 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:47.196 00:07:47.196 real 0m2.916s 00:07:47.196 user 0m9.347s 00:07:47.196 sys 0m0.256s 00:07:47.196 10:00:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.196 10:00:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.196 10:00:17 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.196 10:00:17 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:47.196 10:00:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:47.196 10:00:17 -- common/autotest_common.sh@10 -- # set +x 00:07:47.196 ************************************ 00:07:47.196 START TEST accel_decomp_mthread 00:07:47.196 ************************************ 00:07:47.196 10:00:17 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.196 10:00:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:47.196 10:00:17 -- accel/accel.sh@17 -- # local accel_module 00:07:47.196 10:00:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.196 10:00:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:47.196 10:00:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:47.196 10:00:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:47.196 10:00:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:47.196 10:00:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:47.196 10:00:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:47.196 10:00:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:47.196 10:00:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:47.196 10:00:17 -- accel/accel.sh@42 -- # jq -r . 00:07:47.196 [2024-07-14 10:00:17.473636] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:47.196 [2024-07-14 10:00:17.473712] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71327 ] 00:07:47.196 [2024-07-14 10:00:17.607631] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.196 [2024-07-14 10:00:17.665433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.566 10:00:18 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:48.566 00:07:48.566 SPDK Configuration: 00:07:48.566 Core mask: 0x1 00:07:48.566 00:07:48.566 Accel Perf Configuration: 00:07:48.566 Workload Type: decompress 00:07:48.566 Transfer size: 4096 bytes 00:07:48.566 Vector count 1 00:07:48.566 Module: software 00:07:48.566 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.566 Queue depth: 32 00:07:48.566 Allocate depth: 32 00:07:48.566 # threads/core: 2 00:07:48.566 Run time: 1 seconds 00:07:48.566 Verify: Yes 00:07:48.566 00:07:48.566 Running for 1 seconds... 00:07:48.566 00:07:48.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:48.567 ------------------------------------------------------------------------------------ 00:07:48.567 0,1 33824/s 62 MiB/s 0 0 00:07:48.567 0,0 33696/s 62 MiB/s 0 0 00:07:48.567 ==================================================================================== 00:07:48.567 Total 67520/s 263 MiB/s 0 0' 00:07:48.567 10:00:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.567 10:00:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.567 10:00:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.567 10:00:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.567 10:00:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:48.567 10:00:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.567 10:00:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.567 10:00:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.567 10:00:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.567 10:00:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.567 10:00:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.567 10:00:18 -- accel/accel.sh@42 -- # jq -r . 00:07:48.567 [2024-07-14 10:00:18.894316] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
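For reference: -T 2 asks accel_perf for two worker threads per core, which is why the table above reports rows 0,0 and 0,1 for core 0. The Total line can be cross-checked by summing the per-thread transfer rates; one way (results.txt is a hypothetical file holding the table):

    awk '$1 ~ /^[0-9]+,[0-9]+$/ { sub(/\/s$/, "", $2); total += $2 }
         END { printf "%d transfers/s\n", total }' results.txt
    # 33824/s + 33696/s = 67520/s, matching the Total row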
00:07:48.567 [2024-07-14 10:00:18.894415] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71341 ] 00:07:48.567 [2024-07-14 10:00:19.029048] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.567 [2024-07-14 10:00:19.093550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val=0x1 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val=decompress 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.822 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.822 10:00:19 -- accel/accel.sh@21 -- # val=software 00:07:48.822 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val=32 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- 
accel/accel.sh@21 -- # val=32 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val=2 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val=Yes 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:48.823 10:00:19 -- accel/accel.sh@21 -- # val= 00:07:48.823 10:00:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # IFS=: 00:07:48.823 10:00:19 -- accel/accel.sh@20 -- # read -r var val 00:07:49.751 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:49.751 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.751 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.751 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.751 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:49.751 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.751 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.751 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:50.008 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:50.008 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:50.008 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:50.008 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@21 -- # val= 00:07:50.008 10:00:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # IFS=: 00:07:50.008 10:00:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.008 10:00:20 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.008 10:00:20 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:50.008 10:00:20 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.008 00:07:50.008 real 0m2.853s 00:07:50.008 user 0m2.421s 00:07:50.008 sys 0m0.223s 00:07:50.008 10:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.008 10:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 ************************************ 00:07:50.008 END 
TEST accel_decomp_mthread 00:07:50.008 ************************************ 00:07:50.008 10:00:20 -- accel/accel.sh@114 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.008 10:00:20 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:50.008 10:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:50.008 10:00:20 -- common/autotest_common.sh@10 -- # set +x 00:07:50.008 ************************************ 00:07:50.008 START TEST accel_decomp_full_mthread 00:07:50.008 ************************************ 00:07:50.008 10:00:20 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.008 10:00:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.008 10:00:20 -- accel/accel.sh@17 -- # local accel_module 00:07:50.008 10:00:20 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.008 10:00:20 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:50.008 10:00:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.008 10:00:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.008 10:00:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.008 10:00:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.008 10:00:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.008 10:00:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.008 10:00:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.008 10:00:20 -- accel/accel.sh@42 -- # jq -r . 00:07:50.008 [2024-07-14 10:00:20.386388] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:50.008 [2024-07-14 10:00:20.386502] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71381 ] 00:07:50.008 [2024-07-14 10:00:20.523248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.266 [2024-07-14 10:00:20.582358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.636 10:00:21 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:51.636 00:07:51.636 SPDK Configuration: 00:07:51.636 Core mask: 0x1 00:07:51.636 00:07:51.636 Accel Perf Configuration: 00:07:51.636 Workload Type: decompress 00:07:51.636 Transfer size: 111250 bytes 00:07:51.636 Vector count 1 00:07:51.636 Module: software 00:07:51.636 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.636 Queue depth: 32 00:07:51.636 Allocate depth: 32 00:07:51.636 # threads/core: 2 00:07:51.636 Run time: 1 seconds 00:07:51.636 Verify: Yes 00:07:51.636 00:07:51.636 Running for 1 seconds... 
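For reference: this variant adds -o 0 to the accel_perf invocation; judging by the configuration echoes in this log, that switches the decompress test from the default 4096-byte transfers to whole-file 111250-byte transfers. Side by side (paths shortened, binary path assumed in $accel_perf):

    "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -T 2        # Transfer size: 4096 bytes
    "$accel_perf" -c /dev/fd/62 -t 1 -w decompress -l "$bib" -y -o 0 -T 2   # Transfer size: 111250 bytes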
00:07:51.636 00:07:51.636 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:51.636 ------------------------------------------------------------------------------------ 00:07:51.636 0,1 2304/s 95 MiB/s 0 0 00:07:51.636 0,0 2272/s 93 MiB/s 0 0 00:07:51.636 ==================================================================================== 00:07:51.636 Total 4576/s 485 MiB/s 0 0' 00:07:51.636 10:00:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.636 10:00:21 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:21 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:51.636 10:00:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:51.636 10:00:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:51.636 10:00:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:51.636 10:00:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:51.636 10:00:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:51.636 10:00:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:51.636 10:00:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:51.636 10:00:21 -- accel/accel.sh@42 -- # jq -r . 00:07:51.636 [2024-07-14 10:00:21.844501] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:07:51.636 [2024-07-14 10:00:21.844575] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71395 ] 00:07:51.636 [2024-07-14 10:00:21.975574] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.636 [2024-07-14 10:00:22.038032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=0x1 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=decompress 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=software 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=32 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=32 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=2 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val=Yes 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:51.636 10:00:22 -- accel/accel.sh@21 -- # val= 00:07:51.636 10:00:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # IFS=: 00:07:51.636 10:00:22 -- accel/accel.sh@20 -- # read -r var val 00:07:53.006 10:00:23 -- accel/accel.sh@21 -- # val= 00:07:53.006 10:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.006 10:00:23 -- accel/accel.sh@21 -- # val= 00:07:53.006 10:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.006 10:00:23 -- accel/accel.sh@21 -- # val= 00:07:53.006 10:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # 
read -r var val 00:07:53.006 10:00:23 -- accel/accel.sh@21 -- # val= 00:07:53.006 10:00:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.006 10:00:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.006 10:00:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.006 10:00:23 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:53.006 10:00:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.006 00:07:53.006 real 0m2.915s 00:07:53.006 user 0m2.496s 00:07:53.006 sys 0m0.217s 00:07:53.006 10:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.006 10:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.006 ************************************ 00:07:53.006 END TEST accel_decomp_full_mthread 00:07:53.006 ************************************ 00:07:53.006 10:00:23 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:53.006 10:00:23 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.006 10:00:23 -- accel/accel.sh@129 -- # build_accel_config 00:07:53.006 10:00:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.006 10:00:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:53.006 10:00:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.006 10:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.006 10:00:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.006 10:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.006 10:00:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.006 10:00:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.006 10:00:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.006 10:00:23 -- accel/accel.sh@42 -- # jq -r . 00:07:53.006 ************************************ 00:07:53.006 START TEST accel_dif_functional_tests 00:07:53.006 ************************************ 00:07:53.006 10:00:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:53.006 [2024-07-14 10:00:23.374014] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
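For reference: the dif functional test binary, like accel_perf, receives its JSON accel configuration as -c /dev/fd/62; build_accel_config only populates accel_json_cfg when a hardware module is requested, so these software-only runs pass an effectively empty config. One way to wire fd 62 up (a sketch under that assumption, not the exact accel.sh plumbing):

    cfg='{"subsystems": []}'   # placeholder; real content comes from build_accel_config
    "$rootdir"/test/accel/dif/dif -c /dev/fd/62 62< <(printf '%s' "$cfg")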
00:07:53.006 [2024-07-14 10:00:23.374102] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71436 ] 00:07:53.006 [2024-07-14 10:00:23.511880] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.263 [2024-07-14 10:00:23.575497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.263 [2024-07-14 10:00:23.575595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.263 [2024-07-14 10:00:23.575599] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.263 00:07:53.263 00:07:53.263 CUnit - A unit testing framework for C - Version 2.1-3 00:07:53.263 http://cunit.sourceforge.net/ 00:07:53.263 00:07:53.263 00:07:53.263 Suite: accel_dif 00:07:53.263 Test: verify: DIF generated, GUARD check ...passed 00:07:53.263 Test: verify: DIF generated, APPTAG check ...passed 00:07:53.263 Test: verify: DIF generated, REFTAG check ...passed 00:07:53.263 Test: verify: DIF not generated, GUARD check ...passed 00:07:53.263 Test: verify: DIF not generated, APPTAG check ...[2024-07-14 10:00:23.667157] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.263 [2024-07-14 10:00:23.667295] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:53.263 [2024-07-14 10:00:23.667337] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.263 passed 00:07:53.263 Test: verify: DIF not generated, REFTAG check ...passed 00:07:53.263 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:53.263 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-14 10:00:23.667366] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:53.263 [2024-07-14 10:00:23.667394] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.263 [2024-07-14 10:00:23.667419] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:53.263 [2024-07-14 10:00:23.667484] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:53.263 passed 00:07:53.263 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:53.263 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:53.263 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:53.263 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-14 10:00:23.667806] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:53.263 passed 00:07:53.263 Test: generate copy: DIF generated, GUARD check ...passed 00:07:53.263 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:53.263 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:53.263 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:53.263 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:53.263 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:53.263 Test: generate copy: iovecs-len validate ...[2024-07-14 10:00:23.668287] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:53.263 passed 00:07:53.263 Test: generate copy: buffer alignment validate ...passed 00:07:53.263 00:07:53.263 Run Summary: Type Total Ran Passed Failed Inactive 00:07:53.263 suites 1 1 n/a 0 0 00:07:53.263 tests 20 20 20 0 0 00:07:53.263 asserts 204 204 204 0 n/a 00:07:53.263 00:07:53.263 Elapsed time = 0.005 seconds 00:07:53.519 ************************************ 00:07:53.519 END TEST accel_dif_functional_tests 00:07:53.519 ************************************ 00:07:53.519 00:07:53.519 real 0m0.554s 00:07:53.519 user 0m0.730s 00:07:53.519 sys 0m0.166s 00:07:53.519 10:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.519 10:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.519 00:07:53.519 real 1m4.060s 00:07:53.520 user 1m7.415s 00:07:53.520 sys 0m6.920s 00:07:53.520 ************************************ 00:07:53.520 END TEST accel 00:07:53.520 ************************************ 00:07:53.520 10:00:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.520 10:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.520 10:00:23 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.520 10:00:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:53.520 10:00:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:53.520 10:00:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.520 ************************************ 00:07:53.520 START TEST accel_rpc 00:07:53.520 ************************************ 00:07:53.520 10:00:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:53.520 * Looking for test storage... 00:07:53.520 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:53.520 10:00:24 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:53.520 10:00:24 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=71494 00:07:53.520 10:00:24 -- accel/accel_rpc.sh@15 -- # waitforlisten 71494 00:07:53.520 10:00:24 -- common/autotest_common.sh@819 -- # '[' -z 71494 ']' 00:07:53.520 10:00:24 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:53.520 10:00:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.520 10:00:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:53.520 10:00:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.520 10:00:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:53.520 10:00:24 -- common/autotest_common.sh@10 -- # set +x 00:07:53.776 [2024-07-14 10:00:24.106961] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
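For reference: the accel_rpc test starting here drives a bare spdk_tgt entirely over JSON-RPC, using the rpc_cmd, waitforlisten, and killprocess helpers from autotest_common.sh. The sequence exercised below, in sketch form:

    "$rootdir"/build/bin/spdk_tgt --wait-for-rpc &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                   # blocks until /var/tmp/spdk.sock answers
    rpc_cmd accel_assign_opc -o copy -m incorrect   # pre-init: even a bogus module name is accepted
    rpc_cmd accel_assign_opc -o copy -m software    # reassign the copy opcode to the software module
    rpc_cmd framework_start_init
    rpc_cmd accel_get_opc_assignments | jq -r .copy # expected output: software
    killprocess "$spdk_tgt_pid"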
00:07:53.776 [2024-07-14 10:00:24.107049] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71494 ] 00:07:53.776 [2024-07-14 10:00:24.242782] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.776 [2024-07-14 10:00:24.320487] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:53.776 [2024-07-14 10:00:24.320644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.707 10:00:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:54.707 10:00:25 -- common/autotest_common.sh@852 -- # return 0 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:54.707 10:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:54.707 10:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:54.707 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.707 ************************************ 00:07:54.707 START TEST accel_assign_opcode 00:07:54.707 ************************************ 00:07:54.707 10:00:25 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:54.707 10:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.707 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.707 [2024-07-14 10:00:25.109125] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:54.707 10:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:54.707 10:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.707 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.707 [2024-07-14 10:00:25.117125] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:54.707 10:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.707 10:00:25 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:54.707 10:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.707 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.964 10:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.964 10:00:25 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:54.964 10:00:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:54.964 10:00:25 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:54.964 10:00:25 -- accel/accel_rpc.sh@42 -- # grep software 00:07:54.964 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.964 10:00:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:54.964 software 00:07:54.964 00:07:54.964 real 0m0.302s 00:07:54.964 user 0m0.058s 00:07:54.964 sys 0m0.007s 00:07:54.964 10:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.964 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:54.964 ************************************ 
00:07:54.964 END TEST accel_assign_opcode 00:07:54.964 ************************************ 00:07:54.964 10:00:25 -- accel/accel_rpc.sh@55 -- # killprocess 71494 00:07:54.964 10:00:25 -- common/autotest_common.sh@926 -- # '[' -z 71494 ']' 00:07:54.964 10:00:25 -- common/autotest_common.sh@930 -- # kill -0 71494 00:07:54.964 10:00:25 -- common/autotest_common.sh@931 -- # uname 00:07:54.964 10:00:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:54.964 10:00:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71494 00:07:54.964 10:00:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:54.964 10:00:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:54.964 killing process with pid 71494 00:07:54.964 10:00:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71494' 00:07:54.964 10:00:25 -- common/autotest_common.sh@945 -- # kill 71494 00:07:54.964 10:00:25 -- common/autotest_common.sh@950 -- # wait 71494 00:07:55.530 00:07:55.530 real 0m1.869s 00:07:55.530 user 0m1.986s 00:07:55.530 sys 0m0.442s 00:07:55.530 10:00:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.530 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 END TEST accel_rpc 00:07:55.530 ************************************ 00:07:55.530 10:00:25 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.530 10:00:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:55.530 10:00:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:55.530 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 ************************************ 00:07:55.530 START TEST app_cmdline 00:07:55.530 ************************************ 00:07:55.530 10:00:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:55.530 * Looking for test storage... 00:07:55.530 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:55.530 10:00:25 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:55.530 10:00:25 -- app/cmdline.sh@17 -- # spdk_tgt_pid=71604 00:07:55.530 10:00:25 -- app/cmdline.sh@18 -- # waitforlisten 71604 00:07:55.530 10:00:25 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:55.530 10:00:25 -- common/autotest_common.sh@819 -- # '[' -z 71604 ']' 00:07:55.530 10:00:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:55.530 10:00:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:55.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:55.530 10:00:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:55.530 10:00:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:55.530 10:00:25 -- common/autotest_common.sh@10 -- # set +x 00:07:55.530 [2024-07-14 10:00:26.006153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
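The cmdline test that starts here launches the target with an RPC allow-list and verifies it is enforced. Condensed from the rpc_cmd calls that follow (paths abbreviated):

    build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
    scripts/rpc.py spdk_get_version                      # allowed, returns the version JSON shown below
    scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort  # should list exactly the two allowed methods
    scripts/rpc.py env_dpdk_get_mem_stats                # not allow-listed, expect Code=-32601 Method not found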
00:07:55.530 [2024-07-14 10:00:26.006274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71604 ] 00:07:55.787 [2024-07-14 10:00:26.141712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.787 [2024-07-14 10:00:26.208114] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:55.787 [2024-07-14 10:00:26.208282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.719 10:00:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:56.719 10:00:26 -- common/autotest_common.sh@852 -- # return 0 00:07:56.719 10:00:26 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:56.719 { 00:07:56.719 "fields": { 00:07:56.719 "commit": "4b94202c6", 00:07:56.719 "major": 24, 00:07:56.719 "minor": 1, 00:07:56.719 "patch": 1, 00:07:56.719 "suffix": "-pre" 00:07:56.719 }, 00:07:56.719 "version": "SPDK v24.01.1-pre git sha1 4b94202c6" 00:07:56.719 } 00:07:56.719 10:00:27 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:56.719 10:00:27 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:56.719 10:00:27 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:56.719 10:00:27 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:56.719 10:00:27 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:56.719 10:00:27 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:56.719 10:00:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:56.719 10:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:56.719 10:00:27 -- app/cmdline.sh@26 -- # sort 00:07:56.719 10:00:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:56.719 10:00:27 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:56.719 10:00:27 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:56.719 10:00:27 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.719 10:00:27 -- common/autotest_common.sh@640 -- # local es=0 00:07:56.719 10:00:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.719 10:00:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.719 10:00:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:56.719 10:00:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.719 10:00:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:56.719 10:00:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.719 10:00:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:56.719 10:00:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:56.719 10:00:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:56.719 10:00:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:56.977 2024/07/14 10:00:27 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for 
env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:07:56.977 request: 00:07:56.977 { 00:07:56.977 "method": "env_dpdk_get_mem_stats", 00:07:56.977 "params": {} 00:07:56.977 } 00:07:56.977 Got JSON-RPC error response 00:07:56.977 GoRPCClient: error on JSON-RPC call 00:07:57.235 10:00:27 -- common/autotest_common.sh@643 -- # es=1 00:07:57.235 10:00:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:57.235 10:00:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:57.235 10:00:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:57.235 10:00:27 -- app/cmdline.sh@1 -- # killprocess 71604 00:07:57.235 10:00:27 -- common/autotest_common.sh@926 -- # '[' -z 71604 ']' 00:07:57.235 10:00:27 -- common/autotest_common.sh@930 -- # kill -0 71604 00:07:57.235 10:00:27 -- common/autotest_common.sh@931 -- # uname 00:07:57.235 10:00:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:57.235 10:00:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71604 00:07:57.235 10:00:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:57.235 10:00:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:57.235 killing process with pid 71604 00:07:57.235 10:00:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71604' 00:07:57.235 10:00:27 -- common/autotest_common.sh@945 -- # kill 71604 00:07:57.235 10:00:27 -- common/autotest_common.sh@950 -- # wait 71604 00:07:57.493 00:07:57.493 real 0m2.041s 00:07:57.493 user 0m2.514s 00:07:57.493 sys 0m0.503s 00:07:57.493 10:00:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.493 10:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:57.493 ************************************ 00:07:57.493 END TEST app_cmdline 00:07:57.493 ************************************ 00:07:57.493 10:00:27 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:57.493 10:00:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.493 10:00:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.493 10:00:27 -- common/autotest_common.sh@10 -- # set +x 00:07:57.493 ************************************ 00:07:57.493 START TEST version 00:07:57.493 ************************************ 00:07:57.493 10:00:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:57.749 * Looking for test storage... 
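The version test starting here cross-checks include/spdk/version.h against the installed Python package. A condensed sketch, with the helper body paraphrased from the grep/cut/tr pipeline visible in the next entries (the real script takes lowercase arguments; uppercase is used here for brevity):

    get_header_version() {   # e.g. get_header_version MAJOR -> 24
      grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h | cut -f2 | tr -d '"'
    }
    version=$(get_header_version MAJOR).$(get_header_version MINOR)
    (( $(get_header_version PATCH) != 0 )) && version+=.$(get_header_version PATCH)
    [[ $(get_header_version SUFFIX) == -pre ]] && version+=rc0   # a -pre build is reported as rc0
    [[ $(python3 -c 'import spdk; print(spdk.__version__)') == "$version" ]]   # both sides: 24.1.1rc0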
00:07:57.749 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:57.749 10:00:28 -- app/version.sh@17 -- # get_header_version major 00:07:57.749 10:00:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.749 10:00:28 -- app/version.sh@14 -- # cut -f2 00:07:57.749 10:00:28 -- app/version.sh@14 -- # tr -d '"' 00:07:57.749 10:00:28 -- app/version.sh@17 -- # major=24 00:07:57.749 10:00:28 -- app/version.sh@18 -- # get_header_version minor 00:07:57.749 10:00:28 -- app/version.sh@14 -- # cut -f2 00:07:57.749 10:00:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.749 10:00:28 -- app/version.sh@14 -- # tr -d '"' 00:07:57.749 10:00:28 -- app/version.sh@18 -- # minor=1 00:07:57.749 10:00:28 -- app/version.sh@19 -- # get_header_version patch 00:07:57.749 10:00:28 -- app/version.sh@14 -- # cut -f2 00:07:57.749 10:00:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.749 10:00:28 -- app/version.sh@14 -- # tr -d '"' 00:07:57.749 10:00:28 -- app/version.sh@19 -- # patch=1 00:07:57.750 10:00:28 -- app/version.sh@20 -- # get_header_version suffix 00:07:57.750 10:00:28 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:57.750 10:00:28 -- app/version.sh@14 -- # cut -f2 00:07:57.750 10:00:28 -- app/version.sh@14 -- # tr -d '"' 00:07:57.750 10:00:28 -- app/version.sh@20 -- # suffix=-pre 00:07:57.750 10:00:28 -- app/version.sh@22 -- # version=24.1 00:07:57.750 10:00:28 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:57.750 10:00:28 -- app/version.sh@25 -- # version=24.1.1 00:07:57.750 10:00:28 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:57.750 10:00:28 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:57.750 10:00:28 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:57.750 10:00:28 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:57.750 10:00:28 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:57.750 00:07:57.750 real 0m0.153s 00:07:57.750 user 0m0.088s 00:07:57.750 sys 0m0.099s 00:07:57.750 10:00:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.750 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:57.750 ************************************ 00:07:57.750 END TEST version 00:07:57.750 ************************************ 00:07:57.750 10:00:28 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@204 -- # uname -s 00:07:57.750 10:00:28 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:57.750 10:00:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:57.750 10:00:28 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:57.750 10:00:28 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@268 -- # timing_exit lib 00:07:57.750 10:00:28 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:57.750 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:57.750 10:00:28 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:57.750 10:00:28 -- 
spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:07:57.750 10:00:28 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:07:57.750 10:00:28 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:57.750 10:00:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:57.750 10:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.750 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:57.750 ************************************ 00:07:57.750 START TEST nvmf_tcp 00:07:57.750 ************************************ 00:07:57.750 10:00:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:58.007 * Looking for test storage... 00:07:58.007 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@10 -- # uname -s 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.008 10:00:28 -- nvmf/common.sh@7 -- # uname -s 00:07:58.008 10:00:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.008 10:00:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.008 10:00:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.008 10:00:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.008 10:00:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.008 10:00:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.008 10:00:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.008 10:00:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.008 10:00:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.008 10:00:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.008 10:00:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:07:58.008 10:00:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:07:58.008 10:00:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.008 10:00:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.008 10:00:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.008 10:00:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.008 10:00:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.008 10:00:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.008 10:00:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.008 10:00:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- paths/export.sh@5 -- # export PATH 00:07:58.008 10:00:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- nvmf/common.sh@46 -- # : 0 00:07:58.008 10:00:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.008 10:00:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.008 10:00:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.008 10:00:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.008 10:00:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:58.008 10:00:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.008 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:58.008 10:00:28 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.008 10:00:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:58.008 10:00:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.008 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.008 ************************************ 00:07:58.008 START TEST nvmf_example 00:07:58.008 ************************************ 00:07:58.008 10:00:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:58.008 * Looking for test storage... 
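Before the nvmf tests can run with NET_TYPE=virt, the harness builds a veth/bridge topology (nvmf_veth_init). The setup performed a few entries below condenses to roughly the following (link-up steps and the second target interface omitted):

    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge && ip link set nvmf_br up   # bridge joins the two sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator-to-target sanity check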
00:07:58.008 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:07:58.008 10:00:28 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:07:58.008 10:00:28 -- nvmf/common.sh@7 -- # uname -s 00:07:58.008 10:00:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.008 10:00:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.008 10:00:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.008 10:00:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.008 10:00:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.008 10:00:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.008 10:00:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.008 10:00:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.008 10:00:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.008 10:00:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.008 10:00:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:07:58.008 10:00:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:07:58.008 10:00:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.008 10:00:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.008 10:00:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:07:58.008 10:00:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:58.008 10:00:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.008 10:00:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.008 10:00:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.008 10:00:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- 
paths/export.sh@5 -- # export PATH 00:07:58.008 10:00:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.008 10:00:28 -- nvmf/common.sh@46 -- # : 0 00:07:58.008 10:00:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:07:58.008 10:00:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:07:58.008 10:00:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.008 10:00:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.008 10:00:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:07:58.008 10:00:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:07:58.008 10:00:28 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:58.008 10:00:28 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:58.008 10:00:28 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:58.008 10:00:28 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:58.008 10:00:28 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:58.008 10:00:28 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:58.008 10:00:28 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:58.009 10:00:28 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:58.009 10:00:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.009 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.009 10:00:28 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:58.009 10:00:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:07:58.009 10:00:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.009 10:00:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:07:58.009 10:00:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:07:58.009 10:00:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:07:58.009 10:00:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.009 10:00:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:58.009 10:00:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.009 10:00:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:07:58.009 10:00:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:07:58.009 10:00:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:07:58.009 10:00:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:07:58.009 10:00:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:07:58.009 10:00:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:07:58.009 10:00:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:58.009 10:00:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:58.009 10:00:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:07:58.009 10:00:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:07:58.009 10:00:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:07:58.009 10:00:28 
-- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:07:58.009 10:00:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:07:58.009 10:00:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:58.009 10:00:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:07:58.009 10:00:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:07:58.009 10:00:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:07:58.009 10:00:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:07:58.009 10:00:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:07:58.009 Cannot find device "nvmf_init_br" 00:07:58.009 10:00:28 -- nvmf/common.sh@153 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:07:58.009 Cannot find device "nvmf_tgt_br" 00:07:58.009 10:00:28 -- nvmf/common.sh@154 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:07:58.009 Cannot find device "nvmf_tgt_br2" 00:07:58.009 10:00:28 -- nvmf/common.sh@155 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:07:58.009 Cannot find device "nvmf_init_br" 00:07:58.009 10:00:28 -- nvmf/common.sh@156 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:07:58.009 Cannot find device "nvmf_tgt_br" 00:07:58.009 10:00:28 -- nvmf/common.sh@157 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:07:58.009 Cannot find device "nvmf_tgt_br2" 00:07:58.009 10:00:28 -- nvmf/common.sh@158 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:07:58.009 Cannot find device "nvmf_br" 00:07:58.009 10:00:28 -- nvmf/common.sh@159 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:07:58.009 Cannot find device "nvmf_init_if" 00:07:58.009 10:00:28 -- nvmf/common.sh@160 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:07:58.009 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.009 10:00:28 -- nvmf/common.sh@161 -- # true 00:07:58.009 10:00:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:07:58.267 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:07:58.267 10:00:28 -- nvmf/common.sh@162 -- # true 00:07:58.267 10:00:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:07:58.267 10:00:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:07:58.267 10:00:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:07:58.267 10:00:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:07:58.267 10:00:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:07:58.267 10:00:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:07:58.267 10:00:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:07:58.267 10:00:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:07:58.267 10:00:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:07:58.268 10:00:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:07:58.268 
10:00:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:07:58.268 10:00:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:07:58.268 10:00:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:07:58.268 10:00:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:07:58.268 10:00:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:07:58.268 10:00:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:07:58.268 10:00:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:07:58.268 10:00:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:07:58.268 10:00:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:07:58.268 10:00:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:07:58.268 10:00:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:07:58.268 10:00:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:07:58.523 10:00:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:07:58.523 10:00:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:07:58.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:58.524 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:07:58.524 00:07:58.524 --- 10.0.0.2 ping statistics --- 00:07:58.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.524 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:07:58.524 10:00:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:07:58.524 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:07:58.524 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.083 ms 00:07:58.524 00:07:58.524 --- 10.0.0.3 ping statistics --- 00:07:58.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.524 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:07:58.524 10:00:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:07:58.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:58.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:07:58.524 00:07:58.524 --- 10.0.0.1 ping statistics --- 00:07:58.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:58.524 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:07:58.524 10:00:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:58.524 10:00:28 -- nvmf/common.sh@421 -- # return 0 00:07:58.524 10:00:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:07:58.524 10:00:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:58.524 10:00:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:07:58.524 10:00:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:07:58.524 10:00:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:58.524 10:00:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:07:58.524 10:00:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:07:58.524 10:00:28 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:58.524 10:00:28 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:58.524 10:00:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:07:58.524 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:58.524 10:00:28 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:58.524 10:00:28 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:58.524 10:00:28 -- target/nvmf_example.sh@34 -- # nvmfpid=71953 00:07:58.524 10:00:28 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:58.524 10:00:28 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.524 10:00:28 -- target/nvmf_example.sh@36 -- # waitforlisten 71953 00:07:58.524 10:00:28 -- common/autotest_common.sh@819 -- # '[' -z 71953 ']' 00:07:58.524 10:00:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:58.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:58.524 10:00:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:58.524 10:00:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
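With the example target now coming up inside the namespace, the test provisions it over JSON-RPC. Condensed from the rpc_cmd calls that follow (the comment on -u is a light paraphrase; see rpc.py nvmf_create_transport for the authoritative flag descriptions):

    rpc_cmd nvmf_create_transport -t tcp -o -u 8192    # TCP transport, 8 KiB I/O unit size
    rpc_cmd bdev_malloc_create 64 512                  # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420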
00:07:58.524 10:00:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:58.524 10:00:28 -- common/autotest_common.sh@10 -- # set +x 00:07:59.456 10:00:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:59.456 10:00:29 -- common/autotest_common.sh@852 -- # return 0 00:07:59.456 10:00:29 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:59.456 10:00:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:07:59.456 10:00:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.456 10:00:29 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:59.456 10:00:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.456 10:00:29 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 10:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.713 10:00:30 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:59.713 10:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.713 10:00:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 10:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.713 10:00:30 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:59.713 10:00:30 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:59.713 10:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.713 10:00:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 10:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.713 10:00:30 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:59.713 10:00:30 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:59.713 10:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.713 10:00:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 10:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.713 10:00:30 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:59.713 10:00:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:59.713 10:00:30 -- common/autotest_common.sh@10 -- # set +x 00:07:59.713 10:00:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:59.713 10:00:30 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:07:59.713 10:00:30 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:11.946 Initializing NVMe Controllers 00:08:11.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:11.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:11.946 Initialization complete. Launching workers. 
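For readers decoding the spdk_nvme_perf invocation above, the flags paraphrase as follows (per the tool's usage text):

    -q 64       queue depth per worker
    -o 4096     I/O size in bytes (4 KiB)
    -w randrw   random mixed read/write workload
    -M 30       read percentage of the mix (30% reads, 70% writes)
    -t 10       run time in seconds
    -r '...'    transport ID: TCP over IPv4 to 10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1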
00:08:11.946 ======================================================== 00:08:11.946 Latency(us) 00:08:11.946 Device Information : IOPS MiB/s Average min max 00:08:11.946 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14854.39 58.02 4308.07 672.85 23263.69 00:08:11.946 ======================================================== 00:08:11.946 Total : 14854.39 58.02 4308.07 672.85 23263.69 00:08:11.946 00:08:11.946 10:00:40 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:11.946 10:00:40 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:11.946 10:00:40 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:11.946 10:00:40 -- nvmf/common.sh@116 -- # sync 00:08:11.946 10:00:40 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:11.946 10:00:40 -- nvmf/common.sh@119 -- # set +e 00:08:11.946 10:00:40 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:11.946 10:00:40 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:11.946 rmmod nvme_tcp 00:08:11.946 rmmod nvme_fabrics 00:08:11.946 rmmod nvme_keyring 00:08:11.946 10:00:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:11.946 10:00:40 -- nvmf/common.sh@123 -- # set -e 00:08:11.946 10:00:40 -- nvmf/common.sh@124 -- # return 0 00:08:11.946 10:00:40 -- nvmf/common.sh@477 -- # '[' -n 71953 ']' 00:08:11.946 10:00:40 -- nvmf/common.sh@478 -- # killprocess 71953 00:08:11.946 10:00:40 -- common/autotest_common.sh@926 -- # '[' -z 71953 ']' 00:08:11.946 10:00:40 -- common/autotest_common.sh@930 -- # kill -0 71953 00:08:11.946 10:00:40 -- common/autotest_common.sh@931 -- # uname 00:08:11.946 10:00:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:11.946 10:00:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71953 00:08:11.946 killing process with pid 71953 00:08:11.946 10:00:40 -- common/autotest_common.sh@932 -- # process_name=nvmf 00:08:11.946 10:00:40 -- common/autotest_common.sh@936 -- # '[' nvmf = sudo ']' 00:08:11.946 10:00:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71953' 00:08:11.946 10:00:40 -- common/autotest_common.sh@945 -- # kill 71953 00:08:11.946 10:00:40 -- common/autotest_common.sh@950 -- # wait 71953 00:08:11.946 nvmf threads initialize successfully 00:08:11.946 bdev subsystem init successfully 00:08:11.946 created a nvmf target service 00:08:11.946 create targets's poll groups done 00:08:11.946 all subsystems of target started 00:08:11.946 nvmf target is running 00:08:11.946 all subsystems of target stopped 00:08:11.946 destroy targets's poll groups done 00:08:11.946 destroyed the nvmf target service 00:08:11.946 bdev subsystem finish successfully 00:08:11.946 nvmf threads destroy successfully 00:08:11.946 10:00:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:11.946 10:00:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:11.946 10:00:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:11.946 10:00:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:11.946 10:00:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:11.946 10:00:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.946 10:00:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.946 10:00:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.946 10:00:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:11.946 10:00:40 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:11.946 10:00:40 -- common/autotest_common.sh@718 -- # 
xtrace_disable 00:08:11.946 10:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 00:08:11.946 real 0m12.378s 00:08:11.946 user 0m44.478s 00:08:11.946 sys 0m1.984s 00:08:11.946 ************************************ 00:08:11.946 END TEST nvmf_example 00:08:11.946 ************************************ 00:08:11.946 10:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:11.946 10:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 10:00:40 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:11.946 10:00:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:11.946 10:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.946 10:00:40 -- common/autotest_common.sh@10 -- # set +x 00:08:11.946 ************************************ 00:08:11.946 START TEST nvmf_filesystem 00:08:11.946 ************************************ 00:08:11.946 10:00:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:08:11.946 * Looking for test storage... 00:08:11.946 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.946 10:00:40 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:08:11.946 10:00:40 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:11.946 10:00:40 -- common/autotest_common.sh@34 -- # set -e 00:08:11.946 10:00:40 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:11.946 10:00:40 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:11.946 10:00:40 -- common/autotest_common.sh@38 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:08:11.946 10:00:40 -- common/autotest_common.sh@39 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:08:11.947 10:00:40 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:11.947 10:00:40 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:11.947 10:00:40 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:11.947 10:00:40 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:08:11.947 10:00:40 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:08:11.947 10:00:40 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:11.947 10:00:40 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:11.947 10:00:40 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:11.947 10:00:40 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:11.947 10:00:40 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:11.947 10:00:40 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:11.947 10:00:40 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:11.947 10:00:40 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:11.947 10:00:40 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:11.947 10:00:40 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:11.947 10:00:40 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:11.947 10:00:40 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:11.947 10:00:40 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:11.947 10:00:40 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:11.947 10:00:40 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:11.947 10:00:40 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:11.947 10:00:40 -- common/build_config.sh@22 -- # 
CONFIG_CET=n 00:08:11.947 10:00:40 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:11.947 10:00:40 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:11.947 10:00:40 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:11.947 10:00:40 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:11.947 10:00:40 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:11.947 10:00:40 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:11.947 10:00:40 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:11.947 10:00:40 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:11.947 10:00:40 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:11.947 10:00:40 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:11.947 10:00:40 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:11.947 10:00:40 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:11.947 10:00:40 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:11.947 10:00:40 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:08:11.947 10:00:40 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:11.947 10:00:40 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:11.947 10:00:40 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:11.947 10:00:40 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:11.947 10:00:40 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:08:11.947 10:00:40 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:11.947 10:00:40 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:11.947 10:00:40 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:11.947 10:00:40 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:11.947 10:00:40 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:11.947 10:00:40 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:11.947 10:00:40 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:11.947 10:00:40 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:11.947 10:00:40 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:11.947 10:00:40 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:11.947 10:00:40 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:11.947 10:00:40 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:11.947 10:00:40 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:11.947 10:00:40 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:11.947 10:00:40 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:11.947 10:00:40 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:11.947 10:00:40 -- common/build_config.sh@58 -- # CONFIG_GOLANG=y 00:08:11.947 10:00:40 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:11.947 10:00:40 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:11.947 10:00:40 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.947 10:00:40 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:11.947 10:00:40 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:11.947 10:00:40 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:11.947 10:00:40 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:11.947 10:00:40 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:11.947 10:00:40 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:11.947 10:00:40 -- common/build_config.sh@68 -- # 
CONFIG_AVAHI=y 00:08:11.947 10:00:40 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:11.947 10:00:40 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:11.947 10:00:40 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:11.947 10:00:40 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:11.947 10:00:40 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:11.947 10:00:40 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:11.947 10:00:40 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:11.947 10:00:40 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:11.947 10:00:40 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:11.947 10:00:40 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:11.947 10:00:40 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:11.947 10:00:40 -- common/autotest_common.sh@48 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:11.947 10:00:40 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:08:11.947 10:00:40 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:08:11.947 10:00:40 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:08:11.947 10:00:40 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:08:11.947 10:00:40 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.947 10:00:40 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:08:11.947 10:00:40 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.947 10:00:40 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:11.947 10:00:40 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:11.947 10:00:40 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:11.947 10:00:40 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:11.947 10:00:40 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:11.947 10:00:40 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:11.947 10:00:40 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:08:11.947 10:00:40 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:11.947 #define SPDK_CONFIG_H 00:08:11.947 #define SPDK_CONFIG_APPS 1 00:08:11.947 #define SPDK_CONFIG_ARCH native 00:08:11.947 #undef SPDK_CONFIG_ASAN 00:08:11.947 #define SPDK_CONFIG_AVAHI 1 00:08:11.947 #undef SPDK_CONFIG_CET 00:08:11.947 #define SPDK_CONFIG_COVERAGE 1 00:08:11.947 #define SPDK_CONFIG_CROSS_PREFIX 00:08:11.947 #undef SPDK_CONFIG_CRYPTO 00:08:11.947 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:11.947 #undef SPDK_CONFIG_CUSTOMOCF 00:08:11.947 #undef SPDK_CONFIG_DAOS 00:08:11.947 #define SPDK_CONFIG_DAOS_DIR 00:08:11.947 #define SPDK_CONFIG_DEBUG 1 00:08:11.947 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:11.947 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/dpdk/build 00:08:11.947 #define SPDK_CONFIG_DPDK_INC_DIR //home/vagrant/spdk_repo/dpdk/build/include 00:08:11.947 #define SPDK_CONFIG_DPDK_LIB_DIR /home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.947 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:11.947 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:08:11.947 #define SPDK_CONFIG_EXAMPLES 1 00:08:11.947 #undef SPDK_CONFIG_FC 00:08:11.947 #define 
SPDK_CONFIG_FC_PATH 00:08:11.947 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:11.947 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:11.947 #undef SPDK_CONFIG_FUSE 00:08:11.947 #undef SPDK_CONFIG_FUZZER 00:08:11.947 #define SPDK_CONFIG_FUZZER_LIB 00:08:11.947 #define SPDK_CONFIG_GOLANG 1 00:08:11.947 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:11.947 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:11.947 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:11.947 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:11.947 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:11.947 #define SPDK_CONFIG_IDXD 1 00:08:11.947 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:11.947 #undef SPDK_CONFIG_IPSEC_MB 00:08:11.947 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:11.947 #define SPDK_CONFIG_ISAL 1 00:08:11.948 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:11.948 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:11.948 #define SPDK_CONFIG_LIBDIR 00:08:11.948 #undef SPDK_CONFIG_LTO 00:08:11.948 #define SPDK_CONFIG_MAX_LCORES 00:08:11.948 #define SPDK_CONFIG_NVME_CUSE 1 00:08:11.948 #undef SPDK_CONFIG_OCF 00:08:11.948 #define SPDK_CONFIG_OCF_PATH 00:08:11.948 #define SPDK_CONFIG_OPENSSL_PATH 00:08:11.948 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:11.948 #undef SPDK_CONFIG_PGO_USE 00:08:11.948 #define SPDK_CONFIG_PREFIX /usr/local 00:08:11.948 #undef SPDK_CONFIG_RAID5F 00:08:11.948 #undef SPDK_CONFIG_RBD 00:08:11.948 #define SPDK_CONFIG_RDMA 1 00:08:11.948 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:11.948 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:11.948 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:11.948 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:11.948 #define SPDK_CONFIG_SHARED 1 00:08:11.948 #undef SPDK_CONFIG_SMA 00:08:11.948 #define SPDK_CONFIG_TESTS 1 00:08:11.948 #undef SPDK_CONFIG_TSAN 00:08:11.948 #define SPDK_CONFIG_UBLK 1 00:08:11.948 #define SPDK_CONFIG_UBSAN 1 00:08:11.948 #undef SPDK_CONFIG_UNIT_TESTS 00:08:11.948 #undef SPDK_CONFIG_URING 00:08:11.948 #define SPDK_CONFIG_URING_PATH 00:08:11.948 #undef SPDK_CONFIG_URING_ZNS 00:08:11.948 #define SPDK_CONFIG_USDT 1 00:08:11.948 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:11.948 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:11.948 #undef SPDK_CONFIG_VFIO_USER 00:08:11.948 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:11.948 #define SPDK_CONFIG_VHOST 1 00:08:11.948 #define SPDK_CONFIG_VIRTIO 1 00:08:11.948 #undef SPDK_CONFIG_VTUNE 00:08:11.948 #define SPDK_CONFIG_VTUNE_DIR 00:08:11.948 #define SPDK_CONFIG_WERROR 1 00:08:11.948 #define SPDK_CONFIG_WPDK_DIR 00:08:11.948 #undef SPDK_CONFIG_XNVME 00:08:11.948 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:11.948 10:00:40 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:11.948 10:00:40 -- common/autotest_common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.948 10:00:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.948 10:00:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.948 10:00:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.948 10:00:40 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.948 10:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.948 10:00:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.948 10:00:40 -- paths/export.sh@5 -- # export PATH 00:08:11.948 10:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.948 10:00:40 -- common/autotest_common.sh@50 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:11.948 10:00:40 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:08:11.948 10:00:40 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:11.948 10:00:40 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:08:11.948 10:00:40 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:08:11.948 10:00:40 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:08:11.948 10:00:40 -- pm/common@16 -- # TEST_TAG=N/A 00:08:11.948 10:00:40 -- pm/common@17 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:08:11.948 10:00:40 -- common/autotest_common.sh@52 -- # : 1 00:08:11.948 10:00:40 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:11.948 10:00:40 -- common/autotest_common.sh@56 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:11.948 10:00:40 -- common/autotest_common.sh@58 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:11.948 10:00:40 -- 
common/autotest_common.sh@60 -- # : 1 00:08:11.948 10:00:40 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:11.948 10:00:40 -- common/autotest_common.sh@62 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:11.948 10:00:40 -- common/autotest_common.sh@64 -- # : 00:08:11.948 10:00:40 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:11.948 10:00:40 -- common/autotest_common.sh@66 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:11.948 10:00:40 -- common/autotest_common.sh@68 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:11.948 10:00:40 -- common/autotest_common.sh@70 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:11.948 10:00:40 -- common/autotest_common.sh@72 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:11.948 10:00:40 -- common/autotest_common.sh@74 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:11.948 10:00:40 -- common/autotest_common.sh@76 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:11.948 10:00:40 -- common/autotest_common.sh@78 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:11.948 10:00:40 -- common/autotest_common.sh@80 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:11.948 10:00:40 -- common/autotest_common.sh@82 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:11.948 10:00:40 -- common/autotest_common.sh@84 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:11.948 10:00:40 -- common/autotest_common.sh@86 -- # : 1 00:08:11.948 10:00:40 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:11.948 10:00:40 -- common/autotest_common.sh@88 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:11.948 10:00:40 -- common/autotest_common.sh@90 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:11.948 10:00:40 -- common/autotest_common.sh@92 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:11.948 10:00:40 -- common/autotest_common.sh@94 -- # : 0 00:08:11.948 10:00:40 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:11.948 10:00:40 -- common/autotest_common.sh@96 -- # : tcp 00:08:11.948 10:00:40 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:11.949 10:00:40 -- common/autotest_common.sh@98 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:11.949 10:00:40 -- common/autotest_common.sh@100 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:11.949 10:00:40 -- common/autotest_common.sh@102 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:11.949 10:00:40 -- common/autotest_common.sh@104 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:11.949 10:00:40 -- common/autotest_common.sh@106 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:11.949 
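For orientation, the long run of `: <value>` / `export SPDK_TEST_*` pairs above and below comes from a single bash idiom in autotest_common.sh: each test knob is given a default via parameter expansion and then exported, so the xtrace prints the resolved value as a bare `:` no-op line followed by the export. A minimal sketch of that idiom (flag names and defaults here are illustrative, not copied from the script):

    : "${SPDK_TEST_NVMF:=0}"             # keep the caller's value, else default to 0
    export SPDK_TEST_NVMF                # xtrace shows this pair as ': 0' then 'export ...'
    : "${SPDK_TEST_NVMF_TRANSPORT:=tcp}" # string-valued knobs use the same pattern
    export SPDK_TEST_NVMF_TRANSPORT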
10:00:40 -- common/autotest_common.sh@108 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:11.949 10:00:40 -- common/autotest_common.sh@110 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:11.949 10:00:40 -- common/autotest_common.sh@112 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:11.949 10:00:40 -- common/autotest_common.sh@114 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:11.949 10:00:40 -- common/autotest_common.sh@116 -- # : 1 00:08:11.949 10:00:40 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:11.949 10:00:40 -- common/autotest_common.sh@118 -- # : /home/vagrant/spdk_repo/dpdk/build 00:08:11.949 10:00:40 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:11.949 10:00:40 -- common/autotest_common.sh@120 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:11.949 10:00:40 -- common/autotest_common.sh@122 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:11.949 10:00:40 -- common/autotest_common.sh@124 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:11.949 10:00:40 -- common/autotest_common.sh@126 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:11.949 10:00:40 -- common/autotest_common.sh@128 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:08:11.949 10:00:40 -- common/autotest_common.sh@130 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:11.949 10:00:40 -- common/autotest_common.sh@132 -- # : v23.11 00:08:11.949 10:00:40 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:11.949 10:00:40 -- common/autotest_common.sh@134 -- # : true 00:08:11.949 10:00:40 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:11.949 10:00:40 -- common/autotest_common.sh@136 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:11.949 10:00:40 -- common/autotest_common.sh@138 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:11.949 10:00:40 -- common/autotest_common.sh@140 -- # : 1 00:08:11.949 10:00:40 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:11.949 10:00:40 -- common/autotest_common.sh@142 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:11.949 10:00:40 -- common/autotest_common.sh@144 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:11.949 10:00:40 -- common/autotest_common.sh@146 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:11.949 10:00:40 -- common/autotest_common.sh@148 -- # : 00:08:11.949 10:00:40 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:11.949 10:00:40 -- common/autotest_common.sh@150 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:11.949 10:00:40 -- common/autotest_common.sh@152 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:11.949 10:00:40 -- common/autotest_common.sh@154 -- # : 0 00:08:11.949 10:00:40 -- 
common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:11.949 10:00:40 -- common/autotest_common.sh@156 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:11.949 10:00:40 -- common/autotest_common.sh@158 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:11.949 10:00:40 -- common/autotest_common.sh@160 -- # : 0 00:08:11.949 10:00:40 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:11.949 10:00:40 -- common/autotest_common.sh@163 -- # : 00:08:11.949 10:00:40 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:11.949 10:00:40 -- common/autotest_common.sh@165 -- # : 1 00:08:11.949 10:00:40 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:11.949 10:00:40 -- common/autotest_common.sh@167 -- # : 1 00:08:11.949 10:00:40 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:11.949 10:00:40 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:08:11.949 10:00:40 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.949 10:00:40 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:11.949 10:00:40 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:11.949 10:00:40 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:11.949 10:00:40 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:11.949 10:00:40 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:11.949 10:00:40 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.949 10:00:40 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:11.949 10:00:40 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.949 10:00:40 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:11.949 10:00:40 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:11.949 10:00:40 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:11.949 10:00:40 -- common/autotest_common.sh@196 -- # cat 00:08:11.949 10:00:40 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:11.950 10:00:40 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.950 10:00:40 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:11.950 10:00:40 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.950 10:00:40 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:11.950 10:00:40 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:11.950 10:00:40 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:11.950 10:00:40 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.950 10:00:40 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:08:11.950 10:00:40 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.950 10:00:40 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:08:11.950 10:00:40 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.950 10:00:40 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:11.950 10:00:40 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.950 10:00:40 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:11.950 10:00:40 -- common/autotest_common.sh@242 -- # export 
AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:11.950 10:00:40 -- common/autotest_common.sh@242 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:08:11.950 10:00:40 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.950 10:00:40 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:11.950 10:00:40 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:08:11.950 10:00:40 -- common/autotest_common.sh@249 -- # export valgrind= 00:08:11.950 10:00:40 -- common/autotest_common.sh@249 -- # valgrind= 00:08:11.950 10:00:40 -- common/autotest_common.sh@255 -- # uname -s 00:08:11.950 10:00:40 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:08:11.950 10:00:40 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:08:11.950 10:00:40 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:08:11.950 10:00:40 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:08:11.950 10:00:40 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@258 -- # [[ 0 -eq 1 ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@265 -- # MAKE=make 00:08:11.950 10:00:40 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j10 00:08:11.950 10:00:40 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:08:11.950 10:00:40 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:08:11.950 10:00:40 -- common/autotest_common.sh@284 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:08:11.950 10:00:40 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:08:11.950 10:00:40 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:08:11.950 10:00:40 -- common/autotest_common.sh@291 -- # for i in "$@" 00:08:11.950 10:00:40 -- common/autotest_common.sh@292 -- # case "$i" in 00:08:11.950 10:00:40 -- common/autotest_common.sh@297 -- # TEST_TRANSPORT=tcp 00:08:11.950 10:00:40 -- common/autotest_common.sh@309 -- # [[ -z 72197 ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@309 -- # kill -0 72197 00:08:11.950 10:00:40 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:08:11.950 10:00:40 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:08:11.950 10:00:40 -- common/autotest_common.sh@322 -- # local mount target_dir 00:08:11.950 10:00:40 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:08:11.950 10:00:40 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:08:11.950 10:00:40 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:08:11.950 10:00:40 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:08:11.950 10:00:40 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.OU2wKA 00:08:11.950 10:00:40 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:11.950 10:00:40 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:08:11.950 10:00:40 -- common/autotest_common.sh@346 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.OU2wKA/tests/target /tmp/spdk.OU2wKA 00:08:11.950 10:00:40 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- 
common/autotest_common.sh@318 -- # df -T 00:08:11.950 10:00:40 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=devtmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=4194304 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=4194304 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=6266634240 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=1257472 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=2494353408 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=2507157504 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=12804096 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=11997990912 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=5975945216 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda5 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=btrfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=11997990912 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=20314062848 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=5975945216 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=6267752448 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=6267891712 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=139264 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda2 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext4 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=843546624 00:08:11.950 10:00:40 -- 
common/autotest_common.sh@353 -- # sizes["$mount"]=1012768768 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=100016128 00:08:11.950 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/vda3 00:08:11.950 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=vfat 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=92499968 00:08:11.950 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=104607744 00:08:11.950 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=12107776 00:08:11.951 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.951 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:08:11.951 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:08:11.951 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=1253572608 00:08:11.951 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=1253576704 00:08:11.951 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:08:11.951 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.951 10:00:40 -- common/autotest_common.sh@352 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:08:11.951 10:00:40 -- common/autotest_common.sh@352 -- # fss["$mount"]=fuse.sshfs 00:08:11.951 10:00:40 -- common/autotest_common.sh@353 -- # avails["$mount"]=95494582272 00:08:11.951 10:00:40 -- common/autotest_common.sh@353 -- # sizes["$mount"]=105088212992 00:08:11.951 10:00:40 -- common/autotest_common.sh@354 -- # uses["$mount"]=4208197632 00:08:11.951 10:00:40 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:08:11.951 10:00:40 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:08:11.951 * Looking for test storage... 
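The mounts/fss/sizes/avails/uses arrays being filled above come from parsing `df -T` line by line; set_test_storage then compares each candidate's free space against requested_size (2 GiB plus slack in this run). A minimal sketch of that parsing loop, assuming df's default 1K-block units (variable names mirror the trace; this is an illustration, not the verbatim helper):

    declare -A mounts fss sizes avails uses
    while read -r source fs size use avail _ mount; do
        mounts["$mount"]=$source
        fss["$mount"]=$fs
        sizes["$mount"]=$((size * 1024))    # df -T reports 1K blocks; convert to bytes
        avails["$mount"]=$((avail * 1024))
        uses["$mount"]=$((use * 1024))
    done < <(df -T | grep -v Filesystem)    # drop the header row, as in the trace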
00:08:11.951 10:00:40 -- common/autotest_common.sh@359 -- # local target_space new_size 00:08:11.951 10:00:40 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:08:11.951 10:00:40 -- common/autotest_common.sh@363 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.951 10:00:40 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:11.951 10:00:40 -- common/autotest_common.sh@363 -- # mount=/home 00:08:11.951 10:00:40 -- common/autotest_common.sh@365 -- # target_space=11997990912 00:08:11.951 10:00:40 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:08:11.951 10:00:40 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:08:11.951 10:00:40 -- common/autotest_common.sh@371 -- # [[ btrfs == tmpfs ]] 00:08:11.951 10:00:40 -- common/autotest_common.sh@371 -- # [[ btrfs == ramfs ]] 00:08:11.951 10:00:40 -- common/autotest_common.sh@371 -- # [[ /home == / ]] 00:08:11.951 10:00:40 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.951 10:00:40 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.951 10:00:40 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.951 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:11.951 10:00:40 -- common/autotest_common.sh@380 -- # return 0 00:08:11.951 10:00:40 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:08:11.951 10:00:40 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:08:11.951 10:00:40 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:11.951 10:00:40 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:11.951 10:00:40 -- common/autotest_common.sh@1672 -- # true 00:08:11.951 10:00:40 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:08:11.951 10:00:40 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:11.951 10:00:40 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:11.951 10:00:40 -- common/autotest_common.sh@27 -- # exec 00:08:11.951 10:00:40 -- common/autotest_common.sh@29 -- # exec 00:08:11.951 10:00:40 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:11.951 10:00:40 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:11.951 10:00:40 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:11.951 10:00:40 -- common/autotest_common.sh@18 -- # set -x 00:08:11.951 10:00:40 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:11.951 10:00:40 -- nvmf/common.sh@7 -- # uname -s 00:08:11.951 10:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.951 10:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.951 10:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.951 10:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.951 10:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:11.951 10:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:11.951 10:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.951 10:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:11.951 10:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.951 10:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:11.951 10:00:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:11.951 10:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:11.951 10:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.951 10:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:11.951 10:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:11.951 10:00:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:11.951 10:00:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.951 10:00:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.951 10:00:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.951 10:00:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.951 10:00:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.951 10:00:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.951 10:00:40 -- paths/export.sh@5 -- # export PATH 00:08:11.951 10:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.951 10:00:40 -- nvmf/common.sh@46 -- # : 0 00:08:11.951 10:00:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:11.951 10:00:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:11.951 10:00:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:11.951 10:00:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.951 10:00:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.951 10:00:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:11.951 10:00:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:11.951 10:00:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:11.951 10:00:40 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:11.951 10:00:40 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:11.951 10:00:40 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:11.951 10:00:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:11.952 10:00:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.952 10:00:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:11.952 10:00:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:11.952 10:00:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:11.952 10:00:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:11.952 10:00:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:11.952 10:00:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:11.952 10:00:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:11.952 10:00:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:11.952 10:00:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:11.952 10:00:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:11.952 10:00:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:11.952 10:00:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:11.952 10:00:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.952 10:00:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.952 10:00:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:11.952 10:00:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:11.952 10:00:40 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:11.952 10:00:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:11.952 10:00:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:11.952 10:00:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.952 10:00:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:11.952 10:00:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:11.952 10:00:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:11.952 10:00:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:11.952 10:00:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:11.952 10:00:41 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:11.952 Cannot find device "nvmf_tgt_br" 00:08:11.952 10:00:41 -- nvmf/common.sh@154 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:11.952 Cannot find device "nvmf_tgt_br2" 00:08:11.952 10:00:41 -- nvmf/common.sh@155 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:11.952 10:00:41 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:11.952 Cannot find device "nvmf_tgt_br" 00:08:11.952 10:00:41 -- nvmf/common.sh@157 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:11.952 Cannot find device "nvmf_tgt_br2" 00:08:11.952 10:00:41 -- nvmf/common.sh@158 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:11.952 10:00:41 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:11.952 10:00:41 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:11.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.952 10:00:41 -- nvmf/common.sh@161 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:11.952 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:11.952 10:00:41 -- nvmf/common.sh@162 -- # true 00:08:11.952 10:00:41 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:11.952 10:00:41 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:11.952 10:00:41 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:11.952 10:00:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:11.952 10:00:41 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:11.952 10:00:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:11.952 10:00:41 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:11.952 10:00:41 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:11.952 10:00:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:11.952 10:00:41 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:11.952 10:00:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:11.952 10:00:41 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:11.952 10:00:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:11.952 10:00:41 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:11.952 10:00:41 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:11.952 10:00:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:11.952 10:00:41 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:11.952 10:00:41 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:11.952 10:00:41 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:11.952 10:00:41 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:11.952 10:00:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:11.952 10:00:41 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:11.952 10:00:41 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:11.952 10:00:41 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:11.952 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:11.952 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:08:11.952 00:08:11.952 --- 10.0.0.2 ping statistics --- 00:08:11.952 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.952 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:11.952 10:00:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:11.952 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:11.952 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:08:11.952 00:08:11.952 --- 10.0.0.3 ping statistics --- 00:08:11.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.953 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:11.953 10:00:41 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:11.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:11.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:08:11.953 00:08:11.953 --- 10.0.0.1 ping statistics --- 00:08:11.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:11.953 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:08:11.953 10:00:41 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:11.953 10:00:41 -- nvmf/common.sh@421 -- # return 0 00:08:11.953 10:00:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:11.953 10:00:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:11.953 10:00:41 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:11.953 10:00:41 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:11.953 10:00:41 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:11.953 10:00:41 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:11.953 10:00:41 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:11.953 10:00:41 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:11.953 10:00:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:11.953 10:00:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:11.953 10:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 ************************************ 00:08:11.953 START TEST nvmf_filesystem_no_in_capsule 00:08:11.953 ************************************ 00:08:11.953 10:00:41 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 0 00:08:11.953 10:00:41 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:11.953 10:00:41 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:11.953 10:00:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:11.953 10:00:41 -- common/autotest_common.sh@712 -- # 
xtrace_disable 00:08:11.953 10:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 10:00:41 -- nvmf/common.sh@469 -- # nvmfpid=72363 00:08:11.953 10:00:41 -- nvmf/common.sh@470 -- # waitforlisten 72363 00:08:11.953 10:00:41 -- common/autotest_common.sh@819 -- # '[' -z 72363 ']' 00:08:11.953 10:00:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.953 10:00:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:11.953 10:00:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.953 10:00:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:11.953 10:00:41 -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 10:00:41 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:11.953 [2024-07-14 10:00:41.427315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:11.953 [2024-07-14 10:00:41.427391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.953 [2024-07-14 10:00:41.563230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:11.953 [2024-07-14 10:00:41.660702] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:11.953 [2024-07-14 10:00:41.661126] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.953 [2024-07-14 10:00:41.661204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.953 [2024-07-14 10:00:41.661317] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
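nvmfappstart has just launched nvmf_tgt inside the nvmf_tgt_ns_spdk namespace (pid 72363) and is now blocked in waitforlisten until the RPC socket answers. A simplified sketch of that wait loop, assuming the usual poll-the-RPC-socket approach (helper name and retry count here are illustrative):

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0
        while (( i++ < 100 )); do
            kill -0 "$pid" 2> /dev/null || return 1     # target died during startup
            # an RPC that answers means the UNIX socket is up and serving
            if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1                                        # timed out waiting for the socket
    }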
00:08:11.953 [2024-07-14 10:00:41.661558] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:11.953 [2024-07-14 10:00:41.661664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:11.953 [2024-07-14 10:00:41.661828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:11.953 [2024-07-14 10:00:41.661920] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.953 10:00:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:11.953 10:00:42 -- common/autotest_common.sh@852 -- # return 0 00:08:11.953 10:00:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:11.953 10:00:42 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:11.953 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 10:00:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.953 10:00:42 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:11.953 10:00:42 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:11.953 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.953 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:11.953 [2024-07-14 10:00:42.420025] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.953 10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:11.953 10:00:42 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:11.953 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:11.953 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.211 Malloc1 00:08:12.211 10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.211 10:00:42 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:12.211 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.212 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.212 10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.212 10:00:42 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:12.212 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.212 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.212 10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.212 10:00:42 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.212 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.212 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.212 [2024-07-14 10:00:42.626325] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.212 10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.212 10:00:42 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:12.212 10:00:42 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:12.212 10:00:42 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:12.212 10:00:42 -- common/autotest_common.sh@1359 -- # local bs 00:08:12.212 10:00:42 -- common/autotest_common.sh@1360 -- # local nb 00:08:12.212 10:00:42 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:12.212 10:00:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:12.212 10:00:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.212 
10:00:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:12.212 10:00:42 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:12.212 { 00:08:12.212 "aliases": [ 00:08:12.212 "c80f3aee-6a8e-4015-be57-863b2d6d0096" 00:08:12.212 ], 00:08:12.212 "assigned_rate_limits": { 00:08:12.212 "r_mbytes_per_sec": 0, 00:08:12.212 "rw_ios_per_sec": 0, 00:08:12.212 "rw_mbytes_per_sec": 0, 00:08:12.212 "w_mbytes_per_sec": 0 00:08:12.212 }, 00:08:12.212 "block_size": 512, 00:08:12.212 "claim_type": "exclusive_write", 00:08:12.212 "claimed": true, 00:08:12.212 "driver_specific": {}, 00:08:12.212 "memory_domains": [ 00:08:12.212 { 00:08:12.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:12.212 "dma_device_type": 2 00:08:12.212 } 00:08:12.212 ], 00:08:12.212 "name": "Malloc1", 00:08:12.212 "num_blocks": 1048576, 00:08:12.212 "product_name": "Malloc disk", 00:08:12.212 "supported_io_types": { 00:08:12.212 "abort": true, 00:08:12.212 "compare": false, 00:08:12.212 "compare_and_write": false, 00:08:12.212 "flush": true, 00:08:12.212 "nvme_admin": false, 00:08:12.212 "nvme_io": false, 00:08:12.212 "read": true, 00:08:12.212 "reset": true, 00:08:12.212 "unmap": true, 00:08:12.212 "write": true, 00:08:12.212 "write_zeroes": true 00:08:12.212 }, 00:08:12.212 "uuid": "c80f3aee-6a8e-4015-be57-863b2d6d0096", 00:08:12.212 "zoned": false 00:08:12.212 } 00:08:12.212 ]' 00:08:12.212 10:00:42 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:12.212 10:00:42 -- common/autotest_common.sh@1362 -- # bs=512 00:08:12.212 10:00:42 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:12.212 10:00:42 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:12.212 10:00:42 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:12.212 10:00:42 -- common/autotest_common.sh@1367 -- # echo 512 00:08:12.212 10:00:42 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:12.212 10:00:42 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:12.470 10:00:42 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:12.470 10:00:42 -- common/autotest_common.sh@1177 -- # local i=0 00:08:12.470 10:00:42 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:12.470 10:00:42 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:12.470 10:00:42 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:14.397 10:00:44 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:14.397 10:00:44 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:14.397 10:00:44 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:14.655 10:00:44 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:14.655 10:00:44 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:14.655 10:00:44 -- common/autotest_common.sh@1187 -- # return 0 00:08:14.655 10:00:44 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:14.655 10:00:44 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:14.655 10:00:44 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:14.655 10:00:44 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:14.655 10:00:44 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:14.655 10:00:44 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:14.655 10:00:44 -- 
setup/common.sh@80 -- # echo 536870912 00:08:14.655 10:00:44 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:14.655 10:00:44 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:14.655 10:00:44 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:14.655 10:00:44 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:14.655 10:00:45 -- target/filesystem.sh@69 -- # partprobe 00:08:14.655 10:00:45 -- target/filesystem.sh@70 -- # sleep 1 00:08:15.588 10:00:46 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:08:15.588 10:00:46 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:15.588 10:00:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:15.588 10:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:15.588 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:15.588 ************************************ 00:08:15.588 START TEST filesystem_ext4 00:08:15.588 ************************************ 00:08:15.846 10:00:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:15.846 10:00:46 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:15.846 10:00:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:15.846 10:00:46 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:15.846 10:00:46 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:15.846 10:00:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:15.846 10:00:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:15.846 10:00:46 -- common/autotest_common.sh@905 -- # local force 00:08:15.846 10:00:46 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:15.846 10:00:46 -- common/autotest_common.sh@908 -- # force=-F 00:08:15.846 10:00:46 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:15.846 mke2fs 1.46.5 (30-Dec-2021) 00:08:15.846 Discarding device blocks: 0/522240 done 00:08:15.846 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:15.846 Filesystem UUID: 22abebf2-e23d-4b07-a76c-e49e8801ba33 00:08:15.846 Superblock backups stored on blocks: 00:08:15.846 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:15.846 00:08:15.846 Allocating group tables: 0/64 done 00:08:15.846 Writing inode tables: 0/64 done 00:08:15.846 Creating journal (8192 blocks): done 00:08:15.846 Writing superblocks and filesystem accounting information: 0/64 done 00:08:15.846 00:08:15.846 10:00:46 -- common/autotest_common.sh@921 -- # return 0 00:08:15.846 10:00:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:15.846 10:00:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.104 10:00:46 -- target/filesystem.sh@25 -- # sync 00:08:16.104 10:00:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.104 10:00:46 -- target/filesystem.sh@27 -- # sync 00:08:16.104 10:00:46 -- target/filesystem.sh@29 -- # i=0 00:08:16.104 10:00:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.104 10:00:46 -- target/filesystem.sh@37 -- # kill -0 72363 00:08:16.104 10:00:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.104 10:00:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.104 10:00:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.104 10:00:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.104 00:08:16.104 real 0m0.416s 00:08:16.104 user 0m0.024s 00:08:16.104 sys 0m0.057s 00:08:16.104 
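The ext4 sub-test has just finished (timing block above); btrfs and xfs follow the same path through make_filesystem, whose only fstype-specific branch visible in the trace is the force flag: mke2fs takes uppercase -F, while mkfs.btrfs and mkfs.xfs take lowercase -f. A condensed sketch of that branching (the real helper also retries on failure; that part is omitted here):

    make_filesystem_sketch() {
        local fstype=$1 dev_name=$2 force
        if [[ $fstype == ext4 ]]; then
            force=-F        # mke2fs prompts before overwriting a device; -F forces it
        else
            force=-f        # mkfs.btrfs and mkfs.xfs use -f for the same job
        fi
        "mkfs.$fstype" $force "$dev_name"
    }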
10:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.104 ************************************ 00:08:16.104 END TEST filesystem_ext4 00:08:16.104 ************************************ 00:08:16.104 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.104 10:00:46 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:16.104 10:00:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:16.104 10:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.104 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.104 ************************************ 00:08:16.104 START TEST filesystem_btrfs 00:08:16.104 ************************************ 00:08:16.104 10:00:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:16.104 10:00:46 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:16.104 10:00:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.104 10:00:46 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:16.104 10:00:46 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:16.104 10:00:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:16.104 10:00:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:16.104 10:00:46 -- common/autotest_common.sh@905 -- # local force 00:08:16.104 10:00:46 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:16.104 10:00:46 -- common/autotest_common.sh@910 -- # force=-f 00:08:16.104 10:00:46 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:16.362 btrfs-progs v6.6.2 00:08:16.362 See https://btrfs.readthedocs.io for more information. 00:08:16.362 00:08:16.362 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:16.362 NOTE: several default settings have changed in version 5.15, please make sure 00:08:16.362 this does not affect your deployments: 00:08:16.362 - DUP for metadata (-m dup) 00:08:16.362 - enabled no-holes (-O no-holes) 00:08:16.362 - enabled free-space-tree (-R free-space-tree) 00:08:16.362 00:08:16.362 Label: (null) 00:08:16.362 UUID: 30b7ca43-06d1-43a5-9fa0-507aa3ed5a45 00:08:16.362 Node size: 16384 00:08:16.362 Sector size: 4096 00:08:16.362 Filesystem size: 510.00MiB 00:08:16.362 Block group profiles: 00:08:16.362 Data: single 8.00MiB 00:08:16.362 Metadata: DUP 32.00MiB 00:08:16.362 System: DUP 8.00MiB 00:08:16.362 SSD detected: yes 00:08:16.362 Zoned device: no 00:08:16.362 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:16.362 Runtime features: free-space-tree 00:08:16.362 Checksum: crc32c 00:08:16.362 Number of devices: 1 00:08:16.362 Devices: 00:08:16.362 ID SIZE PATH 00:08:16.362 1 510.00MiB /dev/nvme0n1p1 00:08:16.362 00:08:16.362 10:00:46 -- common/autotest_common.sh@921 -- # return 0 00:08:16.362 10:00:46 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:16.362 10:00:46 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:16.362 10:00:46 -- target/filesystem.sh@25 -- # sync 00:08:16.362 10:00:46 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:16.362 10:00:46 -- target/filesystem.sh@27 -- # sync 00:08:16.362 10:00:46 -- target/filesystem.sh@29 -- # i=0 00:08:16.362 10:00:46 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:16.362 10:00:46 -- target/filesystem.sh@37 -- # kill -0 72363 00:08:16.362 10:00:46 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:16.362 10:00:46 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:16.362 10:00:46 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:16.362 10:00:46 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:16.362 00:08:16.362 real 0m0.277s 00:08:16.362 user 0m0.016s 00:08:16.362 sys 0m0.073s 00:08:16.362 10:00:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.362 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.362 ************************************ 00:08:16.362 END TEST filesystem_btrfs 00:08:16.362 ************************************ 00:08:16.620 10:00:46 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:08:16.620 10:00:46 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:16.620 10:00:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:16.620 10:00:46 -- common/autotest_common.sh@10 -- # set +x 00:08:16.620 ************************************ 00:08:16.620 START TEST filesystem_xfs 00:08:16.620 ************************************ 00:08:16.620 10:00:46 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:16.620 10:00:46 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:16.620 10:00:46 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:16.620 10:00:46 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:16.620 10:00:46 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:16.620 10:00:46 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:16.620 10:00:46 -- common/autotest_common.sh@904 -- # local i=0 00:08:16.620 10:00:46 -- common/autotest_common.sh@905 -- # local force 00:08:16.620 10:00:46 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:16.620 10:00:46 -- common/autotest_common.sh@910 -- # force=-f 00:08:16.620 10:00:46 -- 
common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:16.620 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:16.620 = sectsz=512 attr=2, projid32bit=1 00:08:16.620 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:16.620 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:16.620 data = bsize=4096 blocks=130560, imaxpct=25 00:08:16.620 = sunit=0 swidth=0 blks 00:08:16.620 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:16.620 log =internal log bsize=4096 blocks=16384, version=2 00:08:16.620 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:16.620 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:17.186 Discarding blocks...Done. 00:08:17.186 10:00:47 -- common/autotest_common.sh@921 -- # return 0 00:08:17.186 10:00:47 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:19.716 10:00:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:19.716 10:00:50 -- target/filesystem.sh@25 -- # sync 00:08:19.716 10:00:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:19.716 10:00:50 -- target/filesystem.sh@27 -- # sync 00:08:19.716 10:00:50 -- target/filesystem.sh@29 -- # i=0 00:08:19.716 10:00:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:19.716 10:00:50 -- target/filesystem.sh@37 -- # kill -0 72363 00:08:19.716 10:00:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:19.716 10:00:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:19.716 10:00:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:19.716 10:00:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:19.716 00:08:19.717 real 0m3.132s 00:08:19.717 user 0m0.021s 00:08:19.717 sys 0m0.066s 00:08:19.717 10:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.717 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.717 ************************************ 00:08:19.717 END TEST filesystem_xfs 00:08:19.717 ************************************ 00:08:19.717 10:00:50 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:19.717 10:00:50 -- target/filesystem.sh@93 -- # sync 00:08:19.717 10:00:50 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:19.717 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:19.717 10:00:50 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:19.717 10:00:50 -- common/autotest_common.sh@1198 -- # local i=0 00:08:19.717 10:00:50 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:19.717 10:00:50 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.717 10:00:50 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:19.717 10:00:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:19.717 10:00:50 -- common/autotest_common.sh@1210 -- # return 0 00:08:19.717 10:00:50 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.717 10:00:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:19.717 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.717 10:00:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:19.717 10:00:50 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:19.717 10:00:50 -- target/filesystem.sh@101 -- # killprocess 72363 00:08:19.717 10:00:50 -- common/autotest_common.sh@926 -- # '[' -z 72363 ']' 00:08:19.717 10:00:50 -- common/autotest_common.sh@930 -- # kill -0 72363 00:08:19.717 10:00:50 -- 
common/autotest_common.sh@931 -- # uname 00:08:19.717 10:00:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:19.717 10:00:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72363 00:08:19.975 10:00:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:19.975 10:00:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:19.975 killing process with pid 72363 00:08:19.975 10:00:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72363' 00:08:19.975 10:00:50 -- common/autotest_common.sh@945 -- # kill 72363 00:08:19.975 10:00:50 -- common/autotest_common.sh@950 -- # wait 72363 00:08:20.232 10:00:50 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:20.232 00:08:20.232 real 0m9.311s 00:08:20.232 user 0m35.363s 00:08:20.232 sys 0m1.486s 00:08:20.232 10:00:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.232 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.233 ************************************ 00:08:20.233 END TEST nvmf_filesystem_no_in_capsule 00:08:20.233 ************************************ 00:08:20.233 10:00:50 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:20.233 10:00:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:20.233 10:00:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.233 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.233 ************************************ 00:08:20.233 START TEST nvmf_filesystem_in_capsule 00:08:20.233 ************************************ 00:08:20.233 10:00:50 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_part 4096 00:08:20.233 10:00:50 -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:20.233 10:00:50 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:20.233 10:00:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:20.233 10:00:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:20.233 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.233 10:00:50 -- nvmf/common.sh@469 -- # nvmfpid=72679 00:08:20.233 10:00:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:20.233 10:00:50 -- nvmf/common.sh@470 -- # waitforlisten 72679 00:08:20.233 10:00:50 -- common/autotest_common.sh@819 -- # '[' -z 72679 ']' 00:08:20.233 10:00:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.233 10:00:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:20.233 10:00:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.233 10:00:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:20.233 10:00:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.490 [2024-07-14 10:00:50.814365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
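The nvmfappstart/waitforlisten step visible here reduces to launching the target inside the test namespace and polling its RPC socket until it answers. A minimal sketch in the same shell style, assuming the default /var/tmp/spdk.sock socket and the rpc.py shipped with SPDK:

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5   # target has not opened its RPC socket yet
  done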
00:08:20.490 [2024-07-14 10:00:50.814544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.490 [2024-07-14 10:00:50.962673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:20.747 [2024-07-14 10:00:51.064269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:20.747 [2024-07-14 10:00:51.064438] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:20.747 [2024-07-14 10:00:51.064454] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:20.747 [2024-07-14 10:00:51.064466] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:20.747 [2024-07-14 10:00:51.064601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:20.747 [2024-07-14 10:00:51.065054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:20.747 [2024-07-14 10:00:51.065687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:20.747 [2024-07-14 10:00:51.065723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.680 10:00:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:21.680 10:00:51 -- common/autotest_common.sh@852 -- # return 0 00:08:21.680 10:00:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:21.680 10:00:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:21.680 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 10:00:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:21.680 10:00:51 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:21.680 10:00:51 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:21.680 10:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 [2024-07-14 10:00:51.928474] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:21.680 10:00:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:51 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:21.680 10:00:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:51 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 Malloc1 00:08:21.680 10:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:52 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:21.680 10:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 10:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:52 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:21.680 10:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 10:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:52 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:21.680 10:00:52 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 [2024-07-14 10:00:52.131597] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:21.680 10:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:52 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:21.680 10:00:52 -- common/autotest_common.sh@1357 -- # local bdev_name=Malloc1 00:08:21.680 10:00:52 -- common/autotest_common.sh@1358 -- # local bdev_info 00:08:21.680 10:00:52 -- common/autotest_common.sh@1359 -- # local bs 00:08:21.680 10:00:52 -- common/autotest_common.sh@1360 -- # local nb 00:08:21.680 10:00:52 -- common/autotest_common.sh@1361 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:21.680 10:00:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:21.680 10:00:52 -- common/autotest_common.sh@10 -- # set +x 00:08:21.680 10:00:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:21.680 10:00:52 -- common/autotest_common.sh@1361 -- # bdev_info='[ 00:08:21.680 { 00:08:21.680 "aliases": [ 00:08:21.680 "1be8e54e-bd7f-4b01-b271-f68bd3d4690c" 00:08:21.680 ], 00:08:21.680 "assigned_rate_limits": { 00:08:21.680 "r_mbytes_per_sec": 0, 00:08:21.680 "rw_ios_per_sec": 0, 00:08:21.680 "rw_mbytes_per_sec": 0, 00:08:21.680 "w_mbytes_per_sec": 0 00:08:21.680 }, 00:08:21.680 "block_size": 512, 00:08:21.680 "claim_type": "exclusive_write", 00:08:21.680 "claimed": true, 00:08:21.680 "driver_specific": {}, 00:08:21.680 "memory_domains": [ 00:08:21.680 { 00:08:21.680 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.680 "dma_device_type": 2 00:08:21.680 } 00:08:21.680 ], 00:08:21.680 "name": "Malloc1", 00:08:21.680 "num_blocks": 1048576, 00:08:21.680 "product_name": "Malloc disk", 00:08:21.680 "supported_io_types": { 00:08:21.680 "abort": true, 00:08:21.680 "compare": false, 00:08:21.680 "compare_and_write": false, 00:08:21.680 "flush": true, 00:08:21.680 "nvme_admin": false, 00:08:21.680 "nvme_io": false, 00:08:21.680 "read": true, 00:08:21.680 "reset": true, 00:08:21.680 "unmap": true, 00:08:21.680 "write": true, 00:08:21.680 "write_zeroes": true 00:08:21.680 }, 00:08:21.680 "uuid": "1be8e54e-bd7f-4b01-b271-f68bd3d4690c", 00:08:21.680 "zoned": false 00:08:21.680 } 00:08:21.680 ]' 00:08:21.680 10:00:52 -- common/autotest_common.sh@1362 -- # jq '.[] .block_size' 00:08:21.680 10:00:52 -- common/autotest_common.sh@1362 -- # bs=512 00:08:21.680 10:00:52 -- common/autotest_common.sh@1363 -- # jq '.[] .num_blocks' 00:08:21.937 10:00:52 -- common/autotest_common.sh@1363 -- # nb=1048576 00:08:21.937 10:00:52 -- common/autotest_common.sh@1366 -- # bdev_size=512 00:08:21.937 10:00:52 -- common/autotest_common.sh@1367 -- # echo 512 00:08:21.937 10:00:52 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:21.937 10:00:52 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:21.937 10:00:52 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:21.937 10:00:52 -- common/autotest_common.sh@1177 -- # local i=0 00:08:21.937 10:00:52 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:08:21.937 10:00:52 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:08:21.937 10:00:52 -- common/autotest_common.sh@1184 -- # sleep 2 00:08:24.469 10:00:54 -- 
common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:08:24.469 10:00:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:08:24.469 10:00:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:08:24.469 10:00:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:08:24.469 10:00:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:08:24.469 10:00:54 -- common/autotest_common.sh@1187 -- # return 0 00:08:24.469 10:00:54 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:24.469 10:00:54 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:08:24.469 10:00:54 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:24.469 10:00:54 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:24.469 10:00:54 -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:24.469 10:00:54 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:24.469 10:00:54 -- setup/common.sh@80 -- # echo 536870912 00:08:24.469 10:00:54 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:24.469 10:00:54 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:24.469 10:00:54 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:24.469 10:00:54 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:24.469 10:00:54 -- target/filesystem.sh@69 -- # partprobe 00:08:24.469 10:00:54 -- target/filesystem.sh@70 -- # sleep 1 00:08:25.399 10:00:55 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:25.399 10:00:55 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:25.399 10:00:55 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.399 10:00:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.399 10:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:25.399 ************************************ 00:08:25.399 START TEST filesystem_in_capsule_ext4 00:08:25.399 ************************************ 00:08:25.399 10:00:55 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:25.399 10:00:55 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:25.399 10:00:55 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.399 10:00:55 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:25.399 10:00:55 -- common/autotest_common.sh@902 -- # local fstype=ext4 00:08:25.399 10:00:55 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.399 10:00:55 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.399 10:00:55 -- common/autotest_common.sh@905 -- # local force 00:08:25.399 10:00:55 -- common/autotest_common.sh@907 -- # '[' ext4 = ext4 ']' 00:08:25.399 10:00:55 -- common/autotest_common.sh@908 -- # force=-F 00:08:25.399 10:00:55 -- common/autotest_common.sh@913 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:25.399 mke2fs 1.46.5 (30-Dec-2021) 00:08:25.399 Discarding device blocks: 0/522240 done 00:08:25.399 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:25.399 Filesystem UUID: 317aa358-6cce-46bd-93bd-54f85c298b68 00:08:25.399 Superblock backups stored on blocks: 00:08:25.399 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:25.399 00:08:25.399 Allocating group tables: 0/64 done 00:08:25.399 Writing inode tables: 0/64 done 00:08:25.399 Creating journal (8192 blocks): done 00:08:25.399 Writing superblocks and filesystem accounting information: 0/64 done 00:08:25.399 00:08:25.399 
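The make_filesystem helper driving these mkfs runs boils down to picking the right force flag per filesystem (the trace shows ext4 taking -F while btrfs and xfs take -f). A condensed sketch of that logic, with the helper's retry loop omitted:

  make_filesystem() {
      local fstype=$1 dev_name=$2 force
      [ "$fstype" = ext4 ] && force=-F || force=-f
      mkfs."$fstype" "$force" "$dev_name"   # e.g. mkfs.ext4 -F /dev/nvme0n1p1
  }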
10:00:55 -- common/autotest_common.sh@921 -- # return 0 00:08:25.399 10:00:55 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.400 10:00:55 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.400 10:00:55 -- target/filesystem.sh@25 -- # sync 00:08:25.658 10:00:55 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.658 10:00:55 -- target/filesystem.sh@27 -- # sync 00:08:25.658 10:00:55 -- target/filesystem.sh@29 -- # i=0 00:08:25.658 10:00:55 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.658 10:00:55 -- target/filesystem.sh@37 -- # kill -0 72679 00:08:25.658 10:00:55 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.658 10:00:55 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.658 10:00:55 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.658 10:00:55 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.658 00:08:25.658 real 0m0.392s 00:08:25.658 user 0m0.020s 00:08:25.658 sys 0m0.070s 00:08:25.658 ************************************ 00:08:25.658 END TEST filesystem_in_capsule_ext4 00:08:25.658 ************************************ 00:08:25.658 10:00:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.658 10:00:55 -- common/autotest_common.sh@10 -- # set +x 00:08:25.658 10:00:56 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:25.658 10:00:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.658 10:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.658 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.658 ************************************ 00:08:25.658 START TEST filesystem_in_capsule_btrfs 00:08:25.658 ************************************ 00:08:25.658 10:00:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:25.658 10:00:56 -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:25.658 10:00:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.658 10:00:56 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:25.658 10:00:56 -- common/autotest_common.sh@902 -- # local fstype=btrfs 00:08:25.658 10:00:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.658 10:00:56 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.658 10:00:56 -- common/autotest_common.sh@905 -- # local force 00:08:25.658 10:00:56 -- common/autotest_common.sh@907 -- # '[' btrfs = ext4 ']' 00:08:25.658 10:00:56 -- common/autotest_common.sh@910 -- # force=-f 00:08:25.658 10:00:56 -- common/autotest_common.sh@913 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:25.916 btrfs-progs v6.6.2 00:08:25.916 See https://btrfs.readthedocs.io for more information. 00:08:25.916 00:08:25.916 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:25.916 NOTE: several default settings have changed in version 5.15, please make sure 00:08:25.916 this does not affect your deployments: 00:08:25.916 - DUP for metadata (-m dup) 00:08:25.916 - enabled no-holes (-O no-holes) 00:08:25.916 - enabled free-space-tree (-R free-space-tree) 00:08:25.916 00:08:25.916 Label: (null) 00:08:25.916 UUID: 2b168b80-b8c1-4e61-b722-88d5290d8c48 00:08:25.916 Node size: 16384 00:08:25.916 Sector size: 4096 00:08:25.916 Filesystem size: 510.00MiB 00:08:25.916 Block group profiles: 00:08:25.916 Data: single 8.00MiB 00:08:25.916 Metadata: DUP 32.00MiB 00:08:25.916 System: DUP 8.00MiB 00:08:25.916 SSD detected: yes 00:08:25.916 Zoned device: no 00:08:25.916 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:25.916 Runtime features: free-space-tree 00:08:25.916 Checksum: crc32c 00:08:25.916 Number of devices: 1 00:08:25.916 Devices: 00:08:25.916 ID SIZE PATH 00:08:25.916 1 510.00MiB /dev/nvme0n1p1 00:08:25.916 00:08:25.916 10:00:56 -- common/autotest_common.sh@921 -- # return 0 00:08:25.916 10:00:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:25.916 10:00:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:25.916 10:00:56 -- target/filesystem.sh@25 -- # sync 00:08:25.916 10:00:56 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:25.916 10:00:56 -- target/filesystem.sh@27 -- # sync 00:08:25.916 10:00:56 -- target/filesystem.sh@29 -- # i=0 00:08:25.916 10:00:56 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:25.916 10:00:56 -- target/filesystem.sh@37 -- # kill -0 72679 00:08:25.916 10:00:56 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:25.916 10:00:56 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:25.916 10:00:56 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:25.916 10:00:56 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:25.916 ************************************ 00:08:25.916 END TEST filesystem_in_capsule_btrfs 00:08:25.916 ************************************ 00:08:25.916 00:08:25.916 real 0m0.238s 00:08:25.916 user 0m0.024s 00:08:25.916 sys 0m0.070s 00:08:25.916 10:00:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.916 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.916 10:00:56 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:25.916 10:00:56 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:25.916 10:00:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.916 10:00:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.916 ************************************ 00:08:25.916 START TEST filesystem_in_capsule_xfs 00:08:25.916 ************************************ 00:08:25.916 10:00:56 -- common/autotest_common.sh@1104 -- # nvmf_filesystem_create xfs nvme0n1 00:08:25.916 10:00:56 -- target/filesystem.sh@18 -- # fstype=xfs 00:08:25.916 10:00:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:25.916 10:00:56 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:25.916 10:00:56 -- common/autotest_common.sh@902 -- # local fstype=xfs 00:08:25.916 10:00:56 -- common/autotest_common.sh@903 -- # local dev_name=/dev/nvme0n1p1 00:08:25.916 10:00:56 -- common/autotest_common.sh@904 -- # local i=0 00:08:25.916 10:00:56 -- common/autotest_common.sh@905 -- # local force 00:08:25.916 10:00:56 -- common/autotest_common.sh@907 -- # '[' xfs = ext4 ']' 00:08:25.916 10:00:56 -- common/autotest_common.sh@910 -- # force=-f 
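Each filesystem variant then gets the same smoke test seen above: mount the exported partition, create and delete a file with syncs in between, unmount, and confirm both the target process and the block device survived. Roughly:

  mount /dev/nvme0n1p1 /mnt/device
  touch /mnt/device/aaa && sync
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                       # target process still alive?
  lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible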
00:08:25.916 10:00:56 -- common/autotest_common.sh@913 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:25.916 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:25.916 = sectsz=512 attr=2, projid32bit=1 00:08:25.916 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:25.916 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:25.916 data = bsize=4096 blocks=130560, imaxpct=25 00:08:25.916 = sunit=0 swidth=0 blks 00:08:25.916 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:25.916 log =internal log bsize=4096 blocks=16384, version=2 00:08:25.916 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:25.916 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:26.848 Discarding blocks...Done. 00:08:26.848 10:00:57 -- common/autotest_common.sh@921 -- # return 0 00:08:26.848 10:00:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:28.803 10:00:58 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:28.803 10:00:58 -- target/filesystem.sh@25 -- # sync 00:08:28.803 10:00:58 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:28.803 10:00:58 -- target/filesystem.sh@27 -- # sync 00:08:28.803 10:00:58 -- target/filesystem.sh@29 -- # i=0 00:08:28.803 10:00:58 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:28.803 10:00:58 -- target/filesystem.sh@37 -- # kill -0 72679 00:08:28.803 10:00:58 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:28.803 10:00:58 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:28.803 10:00:58 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:28.803 10:00:58 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:28.803 ************************************ 00:08:28.803 END TEST filesystem_in_capsule_xfs 00:08:28.803 ************************************ 00:08:28.803 00:08:28.803 real 0m2.657s 00:08:28.803 user 0m0.026s 00:08:28.803 sys 0m0.066s 00:08:28.803 10:00:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.803 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.803 10:00:59 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:28.803 10:00:59 -- target/filesystem.sh@93 -- # sync 00:08:28.804 10:00:59 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:28.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:28.804 10:00:59 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:28.804 10:00:59 -- common/autotest_common.sh@1198 -- # local i=0 00:08:28.804 10:00:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.804 10:00:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:08:28.804 10:00:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:08:28.804 10:00:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:28.804 10:00:59 -- common/autotest_common.sh@1210 -- # return 0 00:08:28.804 10:00:59 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:28.804 10:00:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:28.804 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.804 10:00:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:28.804 10:00:59 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:28.804 10:00:59 -- target/filesystem.sh@101 -- # killprocess 72679 00:08:28.804 10:00:59 -- common/autotest_common.sh@926 -- # '[' -z 72679 ']' 00:08:28.804 10:00:59 -- common/autotest_common.sh@930 -- # kill -0 72679 
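waitforserial_disconnect, visible around the nvme disconnect above, is essentially a bounded poll on lsblk's SERIAL column. An approximation, with the retry ceiling assumed rather than taken from the helper:

  i=0
  while lsblk -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
      (( ++i > 15 )) && { echo "device never went away" >&2; exit 1; }
      sleep 1
  done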
00:08:28.804 10:00:59 -- common/autotest_common.sh@931 -- # uname 00:08:28.804 10:00:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:28.804 10:00:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72679 00:08:28.804 killing process with pid 72679 00:08:28.804 10:00:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:28.804 10:00:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:28.804 10:00:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72679' 00:08:28.804 10:00:59 -- common/autotest_common.sh@945 -- # kill 72679 00:08:28.804 10:00:59 -- common/autotest_common.sh@950 -- # wait 72679 00:08:29.369 10:00:59 -- target/filesystem.sh@102 -- # nvmfpid= 00:08:29.369 00:08:29.369 real 0m8.875s 00:08:29.369 user 0m33.705s 00:08:29.369 sys 0m1.519s 00:08:29.369 10:00:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.369 ************************************ 00:08:29.369 END TEST nvmf_filesystem_in_capsule 00:08:29.369 ************************************ 00:08:29.369 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.369 10:00:59 -- target/filesystem.sh@108 -- # nvmftestfini 00:08:29.369 10:00:59 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:29.369 10:00:59 -- nvmf/common.sh@116 -- # sync 00:08:29.369 10:00:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:29.369 10:00:59 -- nvmf/common.sh@119 -- # set +e 00:08:29.369 10:00:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:29.369 10:00:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:29.369 rmmod nvme_tcp 00:08:29.369 rmmod nvme_fabrics 00:08:29.369 rmmod nvme_keyring 00:08:29.369 10:00:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:29.369 10:00:59 -- nvmf/common.sh@123 -- # set -e 00:08:29.369 10:00:59 -- nvmf/common.sh@124 -- # return 0 00:08:29.369 10:00:59 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:08:29.369 10:00:59 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:29.369 10:00:59 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:29.369 10:00:59 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:29.369 10:00:59 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.369 10:00:59 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:29.369 10:00:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.369 10:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.369 10:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.369 10:00:59 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:29.369 00:08:29.369 real 0m19.025s 00:08:29.369 user 1m9.290s 00:08:29.369 sys 0m3.401s 00:08:29.369 10:00:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:29.369 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.369 ************************************ 00:08:29.369 END TEST nvmf_filesystem 00:08:29.369 ************************************ 00:08:29.369 10:00:59 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:29.369 10:00:59 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:29.369 10:00:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:29.369 10:00:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.369 ************************************ 00:08:29.369 START TEST nvmf_discovery 00:08:29.369 ************************************ 00:08:29.369 10:00:59 -- 
common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:29.369 * Looking for test storage... 00:08:29.628 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:29.628 10:00:59 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:29.628 10:00:59 -- nvmf/common.sh@7 -- # uname -s 00:08:29.628 10:00:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.628 10:00:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.628 10:00:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.628 10:00:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.628 10:00:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.628 10:00:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.628 10:00:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.628 10:00:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.628 10:00:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.628 10:00:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.628 10:00:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:29.628 10:00:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:29.628 10:00:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.628 10:00:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.628 10:00:59 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:29.628 10:00:59 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:29.628 10:00:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.628 10:00:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.628 10:00:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.628 10:00:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.628 10:00:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.629 10:00:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.629 10:00:59 -- paths/export.sh@5 -- # export PATH 00:08:29.629 10:00:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.629 10:00:59 -- nvmf/common.sh@46 -- # : 0 00:08:29.629 10:00:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.629 10:00:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.629 10:00:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.629 10:00:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.629 10:00:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.629 10:00:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.629 10:00:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.629 10:00:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.629 10:00:59 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:29.629 10:00:59 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:29.629 10:00:59 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:29.629 10:00:59 -- target/discovery.sh@15 -- # hash nvme 00:08:29.629 10:00:59 -- target/discovery.sh@20 -- # nvmftestinit 00:08:29.629 10:00:59 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:29.629 10:00:59 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.629 10:00:59 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.629 10:00:59 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.629 10:00:59 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.629 10:00:59 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.629 10:00:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.629 10:00:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.629 10:00:59 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:29.629 10:00:59 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:29.629 10:00:59 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:29.629 10:00:59 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:29.629 10:00:59 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:29.629 10:00:59 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:29.629 10:00:59 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:29.629 10:00:59 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.629 10:00:59 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:29.629 10:00:59 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:29.629 10:00:59 -- nvmf/common.sh@144 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:29.629 10:00:59 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:29.629 10:00:59 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:29.629 10:00:59 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.629 10:00:59 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:29.629 10:00:59 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:29.629 10:00:59 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:29.629 10:00:59 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:29.629 10:00:59 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:29.629 10:00:59 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:29.629 Cannot find device "nvmf_tgt_br" 00:08:29.629 10:00:59 -- nvmf/common.sh@154 -- # true 00:08:29.629 10:00:59 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:29.629 Cannot find device "nvmf_tgt_br2" 00:08:29.629 10:00:59 -- nvmf/common.sh@155 -- # true 00:08:29.629 10:00:59 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:29.629 10:01:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:29.629 Cannot find device "nvmf_tgt_br" 00:08:29.629 10:01:00 -- nvmf/common.sh@157 -- # true 00:08:29.629 10:01:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:29.629 Cannot find device "nvmf_tgt_br2" 00:08:29.629 10:01:00 -- nvmf/common.sh@158 -- # true 00:08:29.629 10:01:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:29.629 10:01:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:29.629 10:01:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.629 10:01:00 -- nvmf/common.sh@161 -- # true 00:08:29.629 10:01:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:29.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:29.629 10:01:00 -- nvmf/common.sh@162 -- # true 00:08:29.629 10:01:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:29.629 10:01:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:29.629 10:01:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:29.629 10:01:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:29.629 10:01:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:29.892 10:01:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:29.892 10:01:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:29.892 10:01:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:29.892 10:01:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:29.892 10:01:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:29.892 10:01:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:29.892 10:01:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:29.892 10:01:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:29.892 10:01:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:29.892 10:01:00 
-- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:29.892 10:01:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:29.892 10:01:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:29.892 10:01:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:29.892 10:01:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:29.893 10:01:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:29.893 10:01:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:29.893 10:01:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:29.893 10:01:00 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:29.893 10:01:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:29.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:29.893 00:08:29.893 --- 10.0.0.2 ping statistics --- 00:08:29.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.893 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:29.893 10:01:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:29.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:29.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:08:29.893 00:08:29.893 --- 10.0.0.3 ping statistics --- 00:08:29.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.893 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:08:29.893 10:01:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:29.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:29.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.059 ms 00:08:29.893 00:08:29.893 --- 10.0.0.1 ping statistics --- 00:08:29.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.893 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:08:29.893 10:01:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.893 10:01:00 -- nvmf/common.sh@421 -- # return 0 00:08:29.893 10:01:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:29.893 10:01:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.893 10:01:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:29.893 10:01:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:29.893 10:01:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.893 10:01:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:29.893 10:01:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:29.893 10:01:00 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:29.893 10:01:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:29.893 10:01:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:29.893 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:29.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
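Condensing the nvmf_veth_init calls above: the test network is two veth pairs joined by a bridge, with the target's end moved into its own namespace before the ping checks run.

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br && ip link set nvmf_tgt_br master nvmf_br
  # link bring-up, the iptables ACCEPT rules and the three pings follow as logged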
00:08:29.894 10:01:00 -- nvmf/common.sh@469 -- # nvmfpid=73133 00:08:29.894 10:01:00 -- nvmf/common.sh@470 -- # waitforlisten 73133 00:08:29.894 10:01:00 -- common/autotest_common.sh@819 -- # '[' -z 73133 ']' 00:08:29.894 10:01:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:29.894 10:01:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.894 10:01:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:29.894 10:01:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.894 10:01:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:29.894 10:01:00 -- common/autotest_common.sh@10 -- # set +x 00:08:29.894 [2024-07-14 10:01:00.434878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:29.894 [2024-07-14 10:01:00.435017] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.157 [2024-07-14 10:01:00.576263] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.157 [2024-07-14 10:01:00.646451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:30.157 [2024-07-14 10:01:00.646934] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.157 [2024-07-14 10:01:00.647081] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.157 [2024-07-14 10:01:00.647353] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
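Since the target runs with -e 0xFFFF, the tracepoints mentioned in those startup notices can be read back with the spdk_trace tool; the first invocation is quoted by the log itself, the offline form is assumed and worth checking against spdk_trace -h for this build:

  spdk_trace -s nvmf -i 0               # snapshot of the live target
  spdk_trace -f /dev/shm/nvmf_trace.0   # replay a copied shm trace offline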
00:08:30.157 [2024-07-14 10:01:00.647581] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.157 [2024-07-14 10:01:00.647680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.157 [2024-07-14 10:01:00.647762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.157 [2024-07-14 10:01:00.647764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.087 10:01:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:31.087 10:01:01 -- common/autotest_common.sh@852 -- # return 0 00:08:31.087 10:01:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:31.087 10:01:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 10:01:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:31.087 10:01:01 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 [2024-07-14 10:01:01.505748] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@26 -- # seq 1 4 00:08:31.087 10:01:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.087 10:01:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 Null1 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 [2024-07-14 10:01:01.582054] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.087 10:01:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 Null2 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:31.087 10:01:01 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.087 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.087 10:01:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:31.087 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.087 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.344 10:01:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 Null3 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:31.344 10:01:01 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 Null4 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:31.344 
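The discovery fixture being assembled here is four null bdevs, each wrapped in its own subsystem with a TCP listener; per iteration the RPCs reduce to (rpc.py form of the rpc_cmd calls in the trace):

  for i in 1 2 3 4; do
      ./scripts/rpc.py bdev_null_create "Null$i" 102400 512
      ./scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "SPDK0000000000000$i"
      ./scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      ./scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done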
10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.344 10:01:01 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 4420 00:08:31.344 00:08:31.344 Discovery Log Number of Records 6, Generation counter 6 00:08:31.344 =====Discovery Log Entry 0====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: current discovery subsystem 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4420 00:08:31.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: explicit discovery connections, duplicate discovery information 00:08:31.344 sectype: none 00:08:31.344 =====Discovery Log Entry 1====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: nvme subsystem 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4420 00:08:31.344 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: none 00:08:31.344 sectype: none 00:08:31.344 =====Discovery Log Entry 2====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: nvme subsystem 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4420 00:08:31.344 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: none 00:08:31.344 sectype: none 00:08:31.344 =====Discovery Log Entry 3====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: nvme subsystem 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4420 00:08:31.344 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: none 00:08:31.344 sectype: none 00:08:31.344 =====Discovery Log Entry 4====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: nvme subsystem 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4420 00:08:31.344 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: none 00:08:31.344 sectype: none 00:08:31.344 =====Discovery Log Entry 5====== 00:08:31.344 trtype: tcp 00:08:31.344 adrfam: ipv4 00:08:31.344 subtype: discovery subsystem referral 00:08:31.344 treq: not required 00:08:31.344 portid: 0 00:08:31.344 trsvcid: 4430 00:08:31.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:31.344 traddr: 10.0.0.2 00:08:31.344 eflags: none 00:08:31.344 sectype: none 00:08:31.344 Perform nvmf subsystem discovery via RPC 00:08:31.344 10:01:01 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:31.344 10:01:01 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:31.344 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.344 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.344 [2024-07-14 10:01:01.830122] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:08:31.344 [ 00:08:31.344 { 00:08:31.344 "allow_any_host": true, 00:08:31.344 "hosts": [], 00:08:31.345 "listen_addresses": [ 00:08:31.345 { 00:08:31.345 "adrfam": "IPv4", 00:08:31.345 "traddr": "10.0.0.2", 00:08:31.345 "transport": "TCP", 00:08:31.345 "trsvcid": "4420", 00:08:31.345 "trtype": "TCP" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:31.345 "subtype": "Discovery" 00:08:31.345 }, 00:08:31.345 { 00:08:31.345 "allow_any_host": true, 00:08:31.345 "hosts": [], 00:08:31.345 "listen_addresses": [ 00:08:31.345 { 00:08:31.345 "adrfam": "IPv4", 00:08:31.345 "traddr": "10.0.0.2", 00:08:31.345 "transport": "TCP", 00:08:31.345 "trsvcid": "4420", 00:08:31.345 "trtype": "TCP" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "max_cntlid": 65519, 00:08:31.345 "max_namespaces": 32, 00:08:31.345 "min_cntlid": 1, 00:08:31.345 "model_number": "SPDK bdev Controller", 00:08:31.345 "namespaces": [ 00:08:31.345 { 00:08:31.345 "bdev_name": "Null1", 00:08:31.345 "name": "Null1", 00:08:31.345 "nguid": "CF1616E461F54A688EF6867B701E5EA9", 00:08:31.345 "nsid": 1, 00:08:31.345 "uuid": "cf1616e4-61f5-4a68-8ef6-867b701e5ea9" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:31.345 "serial_number": "SPDK00000000000001", 00:08:31.345 "subtype": "NVMe" 00:08:31.345 }, 00:08:31.345 { 00:08:31.345 "allow_any_host": true, 00:08:31.345 "hosts": [], 00:08:31.345 "listen_addresses": [ 00:08:31.345 { 00:08:31.345 "adrfam": "IPv4", 00:08:31.345 "traddr": "10.0.0.2", 00:08:31.345 "transport": "TCP", 00:08:31.345 "trsvcid": "4420", 00:08:31.345 "trtype": "TCP" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "max_cntlid": 65519, 00:08:31.345 "max_namespaces": 32, 00:08:31.345 "min_cntlid": 1, 00:08:31.345 "model_number": "SPDK bdev Controller", 00:08:31.345 "namespaces": [ 00:08:31.345 { 00:08:31.345 "bdev_name": "Null2", 00:08:31.345 "name": "Null2", 00:08:31.345 "nguid": "D58235C3D7AF467FB0B5BB320617C38D", 00:08:31.345 "nsid": 1, 00:08:31.345 "uuid": "d58235c3-d7af-467f-b0b5-bb320617c38d" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:31.345 "serial_number": "SPDK00000000000002", 00:08:31.345 "subtype": "NVMe" 00:08:31.345 }, 00:08:31.345 { 00:08:31.345 "allow_any_host": true, 00:08:31.345 "hosts": [], 00:08:31.345 "listen_addresses": [ 00:08:31.345 { 00:08:31.345 "adrfam": "IPv4", 00:08:31.345 "traddr": "10.0.0.2", 00:08:31.345 "transport": "TCP", 00:08:31.345 "trsvcid": "4420", 00:08:31.345 "trtype": "TCP" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "max_cntlid": 65519, 00:08:31.345 "max_namespaces": 32, 00:08:31.345 "min_cntlid": 1, 00:08:31.345 "model_number": "SPDK bdev Controller", 00:08:31.345 "namespaces": [ 00:08:31.345 { 00:08:31.345 "bdev_name": "Null3", 00:08:31.345 "name": "Null3", 00:08:31.345 "nguid": "C32466D046064C0CBDB7A7B6F99D33EE", 00:08:31.345 "nsid": 1, 00:08:31.345 "uuid": "c32466d0-4606-4c0c-bdb7-a7b6f99d33ee" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:31.345 "serial_number": "SPDK00000000000003", 00:08:31.345 "subtype": "NVMe" 
00:08:31.345 }, 00:08:31.345 { 00:08:31.345 "allow_any_host": true, 00:08:31.345 "hosts": [], 00:08:31.345 "listen_addresses": [ 00:08:31.345 { 00:08:31.345 "adrfam": "IPv4", 00:08:31.345 "traddr": "10.0.0.2", 00:08:31.345 "transport": "TCP", 00:08:31.345 "trsvcid": "4420", 00:08:31.345 "trtype": "TCP" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "max_cntlid": 65519, 00:08:31.345 "max_namespaces": 32, 00:08:31.345 "min_cntlid": 1, 00:08:31.345 "model_number": "SPDK bdev Controller", 00:08:31.345 "namespaces": [ 00:08:31.345 { 00:08:31.345 "bdev_name": "Null4", 00:08:31.345 "name": "Null4", 00:08:31.345 "nguid": "577DBF254B024EB4A320FEAF013CBBAA", 00:08:31.345 "nsid": 1, 00:08:31.345 "uuid": "577dbf25-4b02-4eb4-a320-feaf013cbbaa" 00:08:31.345 } 00:08:31.345 ], 00:08:31.345 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:31.345 "serial_number": "SPDK00000000000004", 00:08:31.345 "subtype": "NVMe" 00:08:31.345 } 00:08:31.345 ] 00:08:31.345 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.345 10:01:01 -- target/discovery.sh@42 -- # seq 1 4 00:08:31.345 10:01:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.345 10:01:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:31.345 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.345 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.345 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.345 10:01:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:31.345 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.345 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.603 10:01:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.603 10:01:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:31.603 10:01:01 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
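
The trace above drives the whole discovery flow over JSON-RPC: four null-bdev-backed subsystems are exposed on one TCP listener, a referral is registered, and both nvme discover and nvmf_get_subsystems are checked before everything is torn back down. A minimal standalone sketch of that flow, assuming SPDK's scripts/rpc.py is on PATH, a target reachable at 10.0.0.2, and the same NQNs/serials as in this run (the test additionally passes --hostnqn/--hostid values produced by `nvme gen-hostnqn`):

  rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, as in this run
  for i in 1 2 3 4; do
      rpc.py bdev_null_create "Null$i" 102400 512         # null bdev: 102400 MB, 512 B blocks
      rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "SPDK0000000000000$i"                     # -a: allow any host
      rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done
  rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # expect 6 records: discovery + 4 subsystems + referral
  rpc.py nvmf_get_subsystems                 # the same state seen target-side over JSON-RPC

Note the WARNING emitted by nvmf_get_subsystems above: the listener.transport field is deprecated in favor of trtype and scheduled for removal in v24.05, so anything parsing this output should key on trtype.
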
00:08:31.603 10:01:01 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:01 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:31.603 10:01:01 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:31.603 10:01:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:31.603 10:01:01 -- common/autotest_common.sh@10 -- # set +x 00:08:31.603 10:01:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:31.603 10:01:02 -- target/discovery.sh@49 -- # check_bdevs= 00:08:31.603 10:01:02 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:31.603 10:01:02 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:31.603 10:01:02 -- target/discovery.sh@57 -- # nvmftestfini 00:08:31.603 10:01:02 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:31.603 10:01:02 -- nvmf/common.sh@116 -- # sync 00:08:31.603 10:01:02 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:31.603 10:01:02 -- nvmf/common.sh@119 -- # set +e 00:08:31.603 10:01:02 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:31.603 10:01:02 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:31.603 rmmod nvme_tcp 00:08:31.603 rmmod nvme_fabrics 00:08:31.603 rmmod nvme_keyring 00:08:31.603 10:01:02 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:31.603 10:01:02 -- nvmf/common.sh@123 -- # set -e 00:08:31.603 10:01:02 -- nvmf/common.sh@124 -- # return 0 00:08:31.603 10:01:02 -- nvmf/common.sh@477 -- # '[' -n 73133 ']' 00:08:31.603 10:01:02 -- nvmf/common.sh@478 -- # killprocess 73133 00:08:31.603 10:01:02 -- common/autotest_common.sh@926 -- # '[' -z 73133 ']' 00:08:31.603 10:01:02 -- common/autotest_common.sh@930 -- # kill -0 73133 00:08:31.603 10:01:02 -- common/autotest_common.sh@931 -- # uname 00:08:31.603 10:01:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:31.603 10:01:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73133 00:08:31.603 killing process with pid 73133 00:08:31.603 10:01:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:31.603 10:01:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:31.603 10:01:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73133' 00:08:31.603 10:01:02 -- common/autotest_common.sh@945 -- # kill 73133 00:08:31.603 [2024-07-14 10:01:02.136313] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:08:31.603 10:01:02 -- common/autotest_common.sh@950 -- # wait 73133 00:08:31.861 10:01:02 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:31.861 10:01:02 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:31.861 10:01:02 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:31.861 10:01:02 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:31.861 10:01:02 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:31.861 10:01:02 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:08:31.861 10:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:31.861 10:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.861 10:01:02 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:31.861 00:08:31.861 real 0m2.535s 00:08:31.861 user 0m7.113s 00:08:31.861 sys 0m0.644s 00:08:31.861 10:01:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:31.861 ************************************ 00:08:31.861 END TEST nvmf_discovery 00:08:31.861 ************************************ 00:08:31.861 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.120 10:01:02 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.120 10:01:02 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:32.120 10:01:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:32.120 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.120 ************************************ 00:08:32.120 START TEST nvmf_referrals 00:08:32.120 ************************************ 00:08:32.120 10:01:02 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:32.120 * Looking for test storage... 00:08:32.120 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:32.120 10:01:02 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:32.120 10:01:02 -- nvmf/common.sh@7 -- # uname -s 00:08:32.120 10:01:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.120 10:01:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.120 10:01:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.120 10:01:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.120 10:01:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.120 10:01:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.120 10:01:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.120 10:01:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.120 10:01:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.120 10:01:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:32.120 10:01:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:32.120 10:01:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.120 10:01:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.120 10:01:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:32.120 10:01:02 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:32.120 10:01:02 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.120 10:01:02 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.120 10:01:02 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.120 10:01:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.120 10:01:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.120 10:01:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.120 10:01:02 -- paths/export.sh@5 -- # export PATH 00:08:32.120 10:01:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.120 10:01:02 -- nvmf/common.sh@46 -- # : 0 00:08:32.120 10:01:02 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:32.120 10:01:02 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:32.120 10:01:02 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:32.120 10:01:02 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.120 10:01:02 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.120 10:01:02 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:32.120 10:01:02 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:32.120 10:01:02 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:32.120 10:01:02 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:32.120 10:01:02 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:32.120 10:01:02 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:08:32.120 10:01:02 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:32.120 10:01:02 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:32.120 10:01:02 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:32.120 10:01:02 -- target/referrals.sh@37 -- # nvmftestinit 00:08:32.120 10:01:02 -- nvmf/common.sh@429 -- # '[' 
-z tcp ']' 00:08:32.120 10:01:02 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.120 10:01:02 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:32.120 10:01:02 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:32.120 10:01:02 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:32.120 10:01:02 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.120 10:01:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:32.120 10:01:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.120 10:01:02 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:32.120 10:01:02 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:32.120 10:01:02 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:32.120 10:01:02 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:32.120 10:01:02 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:32.120 10:01:02 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:32.120 10:01:02 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:32.120 10:01:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:32.120 10:01:02 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:32.120 10:01:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:32.120 10:01:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:32.120 10:01:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:32.120 10:01:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:32.120 10:01:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:32.120 10:01:02 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:32.120 10:01:02 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:32.121 Cannot find device "nvmf_tgt_br" 00:08:32.121 10:01:02 -- nvmf/common.sh@154 -- # true 00:08:32.121 10:01:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:32.121 Cannot find device "nvmf_tgt_br2" 00:08:32.121 10:01:02 -- nvmf/common.sh@155 -- # true 00:08:32.121 10:01:02 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:32.121 10:01:02 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:32.121 Cannot find device "nvmf_tgt_br" 00:08:32.121 10:01:02 -- nvmf/common.sh@157 -- # true 00:08:32.121 10:01:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:32.121 Cannot find device "nvmf_tgt_br2" 00:08:32.121 10:01:02 -- nvmf/common.sh@158 -- # true 00:08:32.121 10:01:02 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:32.121 10:01:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:32.378 10:01:02 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:32.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.378 10:01:02 -- nvmf/common.sh@161 -- # true 00:08:32.378 10:01:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:32.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:32.378 10:01:02 -- nvmf/common.sh@162 -- # true 00:08:32.378 10:01:02 -- nvmf/common.sh@165 
-- # ip netns add nvmf_tgt_ns_spdk 00:08:32.378 10:01:02 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:32.378 10:01:02 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:32.378 10:01:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:32.378 10:01:02 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:32.378 10:01:02 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:32.378 10:01:02 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:32.379 10:01:02 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:32.379 10:01:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:32.379 10:01:02 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:32.379 10:01:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:32.379 10:01:02 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:32.379 10:01:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:32.379 10:01:02 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:32.379 10:01:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:32.379 10:01:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:32.379 10:01:02 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:32.379 10:01:02 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:32.379 10:01:02 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:32.379 10:01:02 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:32.379 10:01:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:32.379 10:01:02 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:32.379 10:01:02 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:32.379 10:01:02 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:32.379 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:32.379 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:08:32.379 00:08:32.379 --- 10.0.0.2 ping statistics --- 00:08:32.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.379 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:08:32.379 10:01:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:32.379 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:32.379 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:08:32.379 00:08:32.379 --- 10.0.0.3 ping statistics --- 00:08:32.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.379 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:08:32.379 10:01:02 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:32.379 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:32.379 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:08:32.379 00:08:32.379 --- 10.0.0.1 ping statistics --- 00:08:32.379 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:32.379 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:08:32.379 10:01:02 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:32.379 10:01:02 -- nvmf/common.sh@421 -- # return 0 00:08:32.379 10:01:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:32.379 10:01:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:32.379 10:01:02 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:32.379 10:01:02 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:32.379 10:01:02 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:32.379 10:01:02 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:32.379 10:01:02 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:32.379 10:01:02 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:32.379 10:01:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:32.379 10:01:02 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:32.379 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.379 10:01:02 -- nvmf/common.sh@469 -- # nvmfpid=73356 00:08:32.379 10:01:02 -- nvmf/common.sh@470 -- # waitforlisten 73356 00:08:32.379 10:01:02 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:32.379 10:01:02 -- common/autotest_common.sh@819 -- # '[' -z 73356 ']' 00:08:32.379 10:01:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.379 10:01:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:32.379 10:01:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.379 10:01:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:32.379 10:01:02 -- common/autotest_common.sh@10 -- # set +x 00:08:32.636 [2024-07-14 10:01:02.964200] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:32.636 [2024-07-14 10:01:02.964249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:32.636 [2024-07-14 10:01:03.104865] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.636 [2024-07-14 10:01:03.171783] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:32.636 [2024-07-14 10:01:03.171937] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:32.636 [2024-07-14 10:01:03.171950] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:32.636 [2024-07-14 10:01:03.171959] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
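
Before the target above came up, nvmf_veth_init laid out the namespace topology whose pings were just verified; the earlier "Cannot find device" / "Cannot open network namespace" messages are the expected no-op teardown on a clean host, not failures. A condensed sketch of the topology it builds (root and iproute2 assumed; the second target interface, nvmf_tgt_if2 at 10.0.0.3, is wired identically):

  ip netns add nvmf_tgt_ns_spdk                      # the target gets its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk     # move one veth end inside
  ip addr add 10.0.0.1/24 dev nvmf_init_if           # initiator side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br            # bridge the host-side veth peers
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2                                 # initiator -> target sanity check
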
00:08:32.636 [2024-07-14 10:01:03.173091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.636 [2024-07-14 10:01:03.173280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.636 [2024-07-14 10:01:03.173425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.636 [2024-07-14 10:01:03.173421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:33.567 10:01:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:33.567 10:01:03 -- common/autotest_common.sh@852 -- # return 0 00:08:33.567 10:01:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:33.567 10:01:03 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:33.567 10:01:03 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 10:01:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:33.567 10:01:04 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 [2024-07-14 10:01:04.035170] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.567 10:01:04 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 [2024-07-14 10:01:04.058766] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.567 10:01:04 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.567 10:01:04 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.567 10:01:04 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.567 10:01:04 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.567 10:01:04 -- target/referrals.sh@48 -- # jq length 00:08:33.567 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.567 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.567 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:33.825 10:01:04 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:33.825 10:01:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:33.825 10:01:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.825 10:01:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 
00:08:33.825 10:01:04 -- target/referrals.sh@21 -- # sort 00:08:33.825 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.825 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.825 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.825 10:01:04 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:33.825 10:01:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:33.825 10:01:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:33.825 10:01:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:33.825 10:01:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:33.825 10:01:04 -- target/referrals.sh@26 -- # sort 00:08:33.825 10:01:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:33.825 10:01:04 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:33.825 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.825 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.825 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:33.825 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.825 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.825 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:33.825 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.825 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.825 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:33.825 10:01:04 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:33.825 10:01:04 -- target/referrals.sh@56 -- # jq length 00:08:33.825 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:33.825 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:33.825 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:34.083 10:01:04 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:34.083 10:01:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.083 10:01:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # sort 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # echo 00:08:34.083 10:01:04 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:34.083 10:01:04 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:08:34.083 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.083 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.083 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.083 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.083 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.083 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:34.083 10:01:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.083 10:01:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.083 10:01:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.083 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.083 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.083 10:01:04 -- target/referrals.sh@21 -- # sort 00:08:34.083 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:34.083 10:01:04 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:34.083 10:01:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.083 10:01:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # sort 00:08:34.083 10:01:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:34.083 10:01:04 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:34.083 10:01:04 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:34.083 10:01:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.083 10:01:04 -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:34.083 10:01:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.083 10:01:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.341 10:01:04 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:34.341 10:01:04 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.341 10:01:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.341 10:01:04 -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:34.341 10:01:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 
--hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.341 10:01:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:34.341 10:01:04 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.341 10:01:04 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:34.341 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.341 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.341 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.341 10:01:04 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:34.341 10:01:04 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:34.341 10:01:04 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.341 10:01:04 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:34.341 10:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.341 10:01:04 -- target/referrals.sh@21 -- # sort 00:08:34.341 10:01:04 -- common/autotest_common.sh@10 -- # set +x 00:08:34.341 10:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.341 10:01:04 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:34.341 10:01:04 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.341 10:01:04 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:34.341 10:01:04 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.341 10:01:04 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.341 10:01:04 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.341 10:01:04 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.342 10:01:04 -- target/referrals.sh@26 -- # sort 00:08:34.598 10:01:04 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:34.599 10:01:04 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:34.599 10:01:04 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:34.599 10:01:04 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:34.599 10:01:04 -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:34.599 10:01:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.599 10:01:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:34.599 10:01:04 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:34.599 10:01:04 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:34.599 10:01:04 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:34.599 10:01:04 -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:34.599 10:01:04 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.599 10:01:04 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 
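
get_referral_ips above compares two views of the same referral state: the target's own JSON-RPC listing, and the discovery log an initiator actually receives from the 8009 discovery listener. Roughly, under the same rpc.py and address assumptions as before:

  # Target-side view:
  rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  # Initiator-side view of the same state:
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
      | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

The subnqn checks follow the same pattern: a referral registered with -n nqn.2016-06.io.spdk:cnode1 surfaces as an "nvme subsystem" record carrying that subnqn, while one registered with -n discovery surfaces as a "discovery subsystem referral" under the well-known nqn.2014-08.org.nvmexpress.discovery.
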
00:08:34.599 10:01:05 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:34.599 10:01:05 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:34.599 10:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 10:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 10:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 10:01:05 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:34.599 10:01:05 -- target/referrals.sh@82 -- # jq length 00:08:34.599 10:01:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:34.599 10:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:34.599 10:01:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:34.599 10:01:05 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:34.599 10:01:05 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:34.599 10:01:05 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:34.599 10:01:05 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:34.599 10:01:05 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:34.599 10:01:05 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:34.599 10:01:05 -- target/referrals.sh@26 -- # sort 00:08:34.856 10:01:05 -- target/referrals.sh@26 -- # echo 00:08:34.856 10:01:05 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:34.856 10:01:05 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:34.856 10:01:05 -- target/referrals.sh@86 -- # nvmftestfini 00:08:34.856 10:01:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:34.856 10:01:05 -- nvmf/common.sh@116 -- # sync 00:08:34.856 10:01:05 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:34.856 10:01:05 -- nvmf/common.sh@119 -- # set +e 00:08:34.856 10:01:05 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:34.856 10:01:05 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:34.856 rmmod nvme_tcp 00:08:34.856 rmmod nvme_fabrics 00:08:34.856 rmmod nvme_keyring 00:08:34.856 10:01:05 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:34.856 10:01:05 -- nvmf/common.sh@123 -- # set -e 00:08:34.856 10:01:05 -- nvmf/common.sh@124 -- # return 0 00:08:34.856 10:01:05 -- nvmf/common.sh@477 -- # '[' -n 73356 ']' 00:08:34.856 10:01:05 -- nvmf/common.sh@478 -- # killprocess 73356 00:08:34.856 10:01:05 -- common/autotest_common.sh@926 -- # '[' -z 73356 ']' 00:08:34.856 10:01:05 -- common/autotest_common.sh@930 -- # kill -0 73356 00:08:34.856 10:01:05 -- common/autotest_common.sh@931 -- # uname 00:08:34.856 10:01:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:34.856 10:01:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73356 00:08:34.856 10:01:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:34.856 10:01:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:34.856 killing process with pid 73356 00:08:34.856 10:01:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73356' 00:08:34.856 10:01:05 -- common/autotest_common.sh@945 -- # kill 73356 00:08:34.856 10:01:05 -- common/autotest_common.sh@950 -- # wait 73356 00:08:35.113 10:01:05 -- 
nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:35.113 10:01:05 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:35.113 10:01:05 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:35.113 10:01:05 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:35.113 10:01:05 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:35.113 10:01:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.113 10:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.113 10:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.113 10:01:05 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:35.113 00:08:35.113 real 0m3.154s 00:08:35.113 user 0m10.347s 00:08:35.113 sys 0m0.943s 00:08:35.113 10:01:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:35.113 10:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.113 ************************************ 00:08:35.113 END TEST nvmf_referrals 00:08:35.113 ************************************ 00:08:35.113 10:01:05 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:35.113 10:01:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:35.113 10:01:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:35.113 10:01:05 -- common/autotest_common.sh@10 -- # set +x 00:08:35.113 ************************************ 00:08:35.113 START TEST nvmf_connect_disconnect 00:08:35.113 ************************************ 00:08:35.113 10:01:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:35.370 * Looking for test storage... 00:08:35.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:35.370 10:01:05 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:35.370 10:01:05 -- nvmf/common.sh@7 -- # uname -s 00:08:35.370 10:01:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:35.370 10:01:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:35.370 10:01:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:35.370 10:01:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:35.370 10:01:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:35.370 10:01:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:35.370 10:01:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:35.370 10:01:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:35.370 10:01:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:35.370 10:01:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:35.370 10:01:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:35.370 10:01:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:08:35.370 10:01:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:35.370 10:01:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:35.370 10:01:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:35.370 10:01:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:35.370 10:01:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:35.370 10:01:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:35.370 10:01:05 -- scripts/common.sh@442 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:08:35.370 10:01:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.371 10:01:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.371 10:01:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.371 10:01:05 -- paths/export.sh@5 -- # export PATH 00:08:35.371 10:01:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:35.371 10:01:05 -- nvmf/common.sh@46 -- # : 0 00:08:35.371 10:01:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:35.371 10:01:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:35.371 10:01:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:35.371 10:01:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:35.371 10:01:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:35.371 10:01:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:35.371 10:01:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:35.371 10:01:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:35.371 10:01:05 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:35.371 10:01:05 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:35.371 10:01:05 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:35.371 10:01:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:35.371 10:01:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:35.371 10:01:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:35.371 10:01:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:35.371 
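
The hand-off between tests above always runs the same fini/init bracket: unload the kernel initiator modules with retries, kill the in-namespace target, and flush namespace state before the next test rebuilds it. A condensed sketch of that bracket; the sleep pacing and the explicit `ip netns delete` are assumptions, since the script wraps the latter inside _remove_spdk_ns and the trace only shows a bare retry loop:

  sync
  set +e                                   # module removal is allowed to fail and retry
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break     # drags out nvme_fabrics/nvme_keyring too
      sleep 0.5                            # assumed pacing between attempts
  done
  modprobe -v -r nvme-fabrics
  set -e
  kill "$nvmfpid"; wait "$nvmfpid"         # stop the nvmf_tgt started for this test
  ip netns delete nvmf_tgt_ns_spdk         # assumed core of _remove_spdk_ns
  ip -4 addr flush nvmf_init_if
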
10:01:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:35.371 10:01:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:35.371 10:01:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:35.371 10:01:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:35.371 10:01:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:35.371 10:01:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:35.371 10:01:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:35.371 10:01:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:35.371 10:01:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:35.371 10:01:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:35.371 10:01:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.371 10:01:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.371 10:01:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:35.371 10:01:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:35.371 10:01:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:35.371 10:01:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:35.371 10:01:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:35.371 10:01:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.371 10:01:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:35.371 10:01:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:35.371 10:01:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:35.371 10:01:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:35.371 10:01:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:35.371 10:01:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:35.371 Cannot find device "nvmf_tgt_br" 00:08:35.371 10:01:05 -- nvmf/common.sh@154 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:35.371 Cannot find device "nvmf_tgt_br2" 00:08:35.371 10:01:05 -- nvmf/common.sh@155 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:35.371 10:01:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:35.371 Cannot find device "nvmf_tgt_br" 00:08:35.371 10:01:05 -- nvmf/common.sh@157 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:35.371 Cannot find device "nvmf_tgt_br2" 00:08:35.371 10:01:05 -- nvmf/common.sh@158 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:35.371 10:01:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:35.371 10:01:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:35.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.371 10:01:05 -- nvmf/common.sh@161 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:35.371 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:35.371 10:01:05 -- nvmf/common.sh@162 -- # true 00:08:35.371 10:01:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:35.629 10:01:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:35.629 10:01:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name 
nvmf_tgt_br 00:08:35.629 10:01:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:35.629 10:01:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:35.629 10:01:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:35.629 10:01:06 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:35.629 10:01:06 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:35.629 10:01:06 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:35.629 10:01:06 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:35.629 10:01:06 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:35.629 10:01:06 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:35.629 10:01:06 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:35.629 10:01:06 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:35.629 10:01:06 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:35.629 10:01:06 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:35.629 10:01:06 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:35.629 10:01:06 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:35.629 10:01:06 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:35.629 10:01:06 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:35.629 10:01:06 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:35.629 10:01:06 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:35.629 10:01:06 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:35.629 10:01:06 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:35.629 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.629 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:08:35.629 00:08:35.629 --- 10.0.0.2 ping statistics --- 00:08:35.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.629 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:08:35.629 10:01:06 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:35.629 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:35.629 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:08:35.629 00:08:35.629 --- 10.0.0.3 ping statistics --- 00:08:35.629 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.629 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:08:35.629 10:01:06 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:35.887 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:35.887 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.047 ms 00:08:35.887 00:08:35.887 --- 10.0.0.1 ping statistics --- 00:08:35.887 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.887 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:35.887 10:01:06 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.887 10:01:06 -- nvmf/common.sh@421 -- # return 0 00:08:35.887 10:01:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.887 10:01:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.887 10:01:06 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:35.887 10:01:06 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:35.887 10:01:06 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.887 10:01:06 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:35.887 10:01:06 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:35.887 10:01:06 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:35.887 10:01:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:35.887 10:01:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:35.887 10:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:35.887 10:01:06 -- nvmf/common.sh@469 -- # nvmfpid=73665 00:08:35.887 10:01:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.887 10:01:06 -- nvmf/common.sh@470 -- # waitforlisten 73665 00:08:35.887 10:01:06 -- common/autotest_common.sh@819 -- # '[' -z 73665 ']' 00:08:35.887 10:01:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.887 10:01:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:35.887 10:01:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.887 10:01:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:35.887 10:01:06 -- common/autotest_common.sh@10 -- # set +x 00:08:35.887 [2024-07-14 10:01:06.276244] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:08:35.887 [2024-07-14 10:01:06.276366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.887 [2024-07-14 10:01:06.421733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.145 [2024-07-14 10:01:06.526713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:36.145 [2024-07-14 10:01:06.527171] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.145 [2024-07-14 10:01:06.527263] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.145 [2024-07-14 10:01:06.527425] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
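
nvmfappstart above launches the target inside the namespace and then waits in waitforlisten for the app's RPC socket before any rpc_cmd is issued. A sketch, with the polling loop as an illustrative stand-in for waitforlisten (rpc_get_methods is a cheap, always-available RPC):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!        # -m 0xF: four reactors; -e 0xFFFF: the tracepoint mask noted above
  # Stand-in for waitforlisten: poll the UNIX-domain RPC socket until it answers.
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done
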
00:08:36.145 [2024-07-14 10:01:06.527639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.145 [2024-07-14 10:01:06.528146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.145 [2024-07-14 10:01:06.528556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.145 [2024-07-14 10:01:06.528842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:37.075 10:01:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:37.075 10:01:07 -- common/autotest_common.sh@852 -- # return 0 00:08:37.076 10:01:07 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:37.076 10:01:07 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 10:01:07 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:37.076 10:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 [2024-07-14 10:01:07.393598] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:37.076 10:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.076 10:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 10:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:37.076 10:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 10:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.076 10:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 10:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:37.076 10:01:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:37.076 10:01:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.076 [2024-07-14 10:01:07.472158] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:37.076 10:01:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:37.076 10:01:07 -- target/connect_disconnect.sh@34 -- # set +x 00:08:39.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:41.514 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:45.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:08:48.476 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.001 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:52.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:55.454 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.365 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:59.893 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.797 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:04.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:06.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.764 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:10.658 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:13.191 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:17.619 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:20.141 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:24.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:29.066 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.969 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.507 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.826 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.263 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.796 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.251 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.153 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:02.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:04.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:07.223 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:09.126 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:11.655 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:13.556 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:16.088 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:18.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:23.053 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.954 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.483 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.421 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:33.848 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:36.379 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:10:38.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.831 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.292 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.192 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.149 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.142 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:07.495 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.924 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.448 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.904 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:20.804 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:23.337 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.239 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.731 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:36.630 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:39.159 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.058 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:43.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:46.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:48.030 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:52.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.411 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.771 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:08.314 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.216 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.748 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.644 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.077 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.631 10:04:51 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 
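The hundred "NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)" lines above are connect_disconnect.sh's main loop at work: num_iterations=100 and NVME_CONNECT='nvme connect -i 8' were set just before the loop started. In outline, each iteration connects, waits for the namespace to appear, then disconnects. The sketch below is a hedged reconstruction from the traced variables, not the script verbatim; the real test wraps the waits in its waitforserial and waitforserial_disconnect helpers.

# Illustrative outline of the connect/disconnect loop.
for ((i = 0; i < 100; i++)); do
  # -i 8 requests eight I/O queues for the association.
  nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  # Wait for the Malloc0 namespace to surface as a block device; the
  # subsystem serial (SPDKISFASTANDAWESOME) identifies it in lsblk.
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do
    sleep 0.2
  done
  # Tearing the association down prints one "disconnected 1 controller(s)" line.
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
done

The timestamps show each round trip settling in roughly 2 to 2.5 seconds; after the hundredth iteration the trap is cleared and the nvmftestfini teardown below unloads nvme-tcp/nvme-fabrics, kills the target, and flushes the initiator address.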
00:12:21.631 10:04:51 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:21.631 10:04:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:21.631 10:04:51 -- nvmf/common.sh@116 -- # sync 00:12:21.631 10:04:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:21.631 10:04:51 -- nvmf/common.sh@119 -- # set +e 00:12:21.631 10:04:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:21.631 10:04:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:21.631 rmmod nvme_tcp 00:12:21.631 rmmod nvme_fabrics 00:12:21.631 rmmod nvme_keyring 00:12:21.631 10:04:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:21.631 10:04:51 -- nvmf/common.sh@123 -- # set -e 00:12:21.631 10:04:51 -- nvmf/common.sh@124 -- # return 0 00:12:21.631 10:04:51 -- nvmf/common.sh@477 -- # '[' -n 73665 ']' 00:12:21.631 10:04:51 -- nvmf/common.sh@478 -- # killprocess 73665 00:12:21.631 10:04:51 -- common/autotest_common.sh@926 -- # '[' -z 73665 ']' 00:12:21.631 10:04:51 -- common/autotest_common.sh@930 -- # kill -0 73665 00:12:21.631 10:04:51 -- common/autotest_common.sh@931 -- # uname 00:12:21.631 10:04:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.631 10:04:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73665 00:12:21.631 10:04:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:21.631 10:04:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:21.631 killing process with pid 73665 00:12:21.631 10:04:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73665' 00:12:21.631 10:04:51 -- common/autotest_common.sh@945 -- # kill 73665 00:12:21.631 10:04:51 -- common/autotest_common.sh@950 -- # wait 73665 00:12:21.631 10:04:52 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:21.631 10:04:52 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:21.631 10:04:52 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:21.631 10:04:52 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.631 10:04:52 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:21.631 10:04:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.631 10:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.631 10:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.631 10:04:52 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:21.631 00:12:21.631 real 3m46.490s 00:12:21.631 user 14m43.258s 00:12:21.631 sys 0m20.856s 00:12:21.631 10:04:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.631 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:21.631 ************************************ 00:12:21.631 END TEST nvmf_connect_disconnect 00:12:21.631 ************************************ 00:12:21.890 10:04:52 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.890 10:04:52 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:21.890 10:04:52 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:21.890 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:21.890 ************************************ 00:12:21.890 START TEST nvmf_multitarget 00:12:21.890 ************************************ 00:12:21.890 10:04:52 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:21.890 * Looking for test storage... 
00:12:21.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:21.890 10:04:52 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:21.890 10:04:52 -- nvmf/common.sh@7 -- # uname -s 00:12:21.890 10:04:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.890 10:04:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.890 10:04:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.890 10:04:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.890 10:04:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.890 10:04:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.890 10:04:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.890 10:04:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.890 10:04:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.890 10:04:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:21.890 10:04:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:21.890 10:04:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.890 10:04:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.890 10:04:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:21.890 10:04:52 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:21.890 10:04:52 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.890 10:04:52 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.890 10:04:52 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.890 10:04:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.890 10:04:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.890 10:04:52 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.890 10:04:52 -- 
paths/export.sh@5 -- # export PATH 00:12:21.890 10:04:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.890 10:04:52 -- nvmf/common.sh@46 -- # : 0 00:12:21.890 10:04:52 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:21.890 10:04:52 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:21.890 10:04:52 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:21.890 10:04:52 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.890 10:04:52 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.890 10:04:52 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:21.890 10:04:52 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:21.890 10:04:52 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:21.890 10:04:52 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:21.890 10:04:52 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:21.890 10:04:52 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:21.890 10:04:52 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.890 10:04:52 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:21.890 10:04:52 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:21.890 10:04:52 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:21.890 10:04:52 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.890 10:04:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.890 10:04:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.890 10:04:52 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:21.890 10:04:52 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:21.890 10:04:52 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.890 10:04:52 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.890 10:04:52 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:21.890 10:04:52 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:21.890 10:04:52 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:21.890 10:04:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:21.890 10:04:52 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:21.890 10:04:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.890 10:04:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:21.890 10:04:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:21.890 10:04:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:21.890 10:04:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:21.890 10:04:52 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:21.890 10:04:52 
-- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:21.890 Cannot find device "nvmf_tgt_br" 00:12:21.890 10:04:52 -- nvmf/common.sh@154 -- # true 00:12:21.890 10:04:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:21.890 Cannot find device "nvmf_tgt_br2" 00:12:21.890 10:04:52 -- nvmf/common.sh@155 -- # true 00:12:21.890 10:04:52 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:21.890 10:04:52 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:21.890 Cannot find device "nvmf_tgt_br" 00:12:21.890 10:04:52 -- nvmf/common.sh@157 -- # true 00:12:21.890 10:04:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:21.891 Cannot find device "nvmf_tgt_br2" 00:12:21.891 10:04:52 -- nvmf/common.sh@158 -- # true 00:12:21.891 10:04:52 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:21.891 10:04:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:21.891 10:04:52 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:21.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.891 10:04:52 -- nvmf/common.sh@161 -- # true 00:12:21.891 10:04:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:21.891 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:21.891 10:04:52 -- nvmf/common.sh@162 -- # true 00:12:21.891 10:04:52 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.150 10:04:52 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.150 10:04:52 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.150 10:04:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.150 10:04:52 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:22.150 10:04:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:22.150 10:04:52 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:22.150 10:04:52 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:22.150 10:04:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:22.150 10:04:52 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:22.150 10:04:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:22.150 10:04:52 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:22.150 10:04:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:22.150 10:04:52 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:22.150 10:04:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:22.150 10:04:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:22.150 10:04:52 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:22.150 10:04:52 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:22.150 10:04:52 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:22.150 10:04:52 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:22.150 10:04:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:22.150 10:04:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:22.150 10:04:52 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:22.150 10:04:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:22.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:22.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:12:22.150 00:12:22.150 --- 10.0.0.2 ping statistics --- 00:12:22.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.150 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:12:22.150 10:04:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:22.150 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:22.150 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:12:22.150 00:12:22.150 --- 10.0.0.3 ping statistics --- 00:12:22.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.150 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:12:22.150 10:04:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:22.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:22.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:12:22.150 00:12:22.150 --- 10.0.0.1 ping statistics --- 00:12:22.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:22.150 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:12:22.150 10:04:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:22.150 10:04:52 -- nvmf/common.sh@421 -- # return 0 00:12:22.150 10:04:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:22.150 10:04:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:22.150 10:04:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:22.150 10:04:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:22.150 10:04:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:22.150 10:04:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:22.150 10:04:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:22.150 10:04:52 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:22.150 10:04:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:22.150 10:04:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:22.150 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.150 10:04:52 -- nvmf/common.sh@469 -- # nvmfpid=77447 00:12:22.150 10:04:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:22.150 10:04:52 -- nvmf/common.sh@470 -- # waitforlisten 77447 00:12:22.150 10:04:52 -- common/autotest_common.sh@819 -- # '[' -z 77447 ']' 00:12:22.150 10:04:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:22.150 10:04:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:22.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:22.150 10:04:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:22.150 10:04:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:22.150 10:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:22.150 [2024-07-14 10:04:52.683785] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:22.150 [2024-07-14 10:04:52.683881] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:22.408 [2024-07-14 10:04:52.824254] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:22.408 [2024-07-14 10:04:52.901324] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:22.408 [2024-07-14 10:04:52.901750] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:22.408 [2024-07-14 10:04:52.901891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:22.408 [2024-07-14 10:04:52.902062] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:22.408 [2024-07-14 10:04:52.902306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:22.408 [2024-07-14 10:04:52.902392] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:22.408 [2024-07-14 10:04:52.902465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:22.408 [2024-07-14 10:04:52.902467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.341 10:04:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:23.341 10:04:53 -- common/autotest_common.sh@852 -- # return 0 00:12:23.341 10:04:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:23.341 10:04:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:23.341 10:04:53 -- common/autotest_common.sh@10 -- # set +x 00:12:23.341 10:04:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.341 10:04:53 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:23.341 10:04:53 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.341 10:04:53 -- target/multitarget.sh@21 -- # jq length 00:12:23.341 10:04:53 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:23.341 10:04:53 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:23.598 "nvmf_tgt_1" 00:12:23.598 10:04:53 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:23.598 "nvmf_tgt_2" 00:12:23.598 10:04:54 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.598 10:04:54 -- target/multitarget.sh@28 -- # jq length 00:12:23.857 10:04:54 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:23.857 10:04:54 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:23.857 true 00:12:23.857 10:04:54 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:23.857 true 00:12:23.857 10:04:54 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:23.857 10:04:54 -- target/multitarget.sh@35 -- # jq length 00:12:24.115 10:04:54 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:24.115 10:04:54 -- 
target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:24.115 10:04:54 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:24.115 10:04:54 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:24.115 10:04:54 -- nvmf/common.sh@116 -- # sync 00:12:24.115 10:04:54 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:24.115 10:04:54 -- nvmf/common.sh@119 -- # set +e 00:12:24.116 10:04:54 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:24.116 10:04:54 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:24.116 rmmod nvme_tcp 00:12:24.116 rmmod nvme_fabrics 00:12:24.116 rmmod nvme_keyring 00:12:24.116 10:04:54 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:24.116 10:04:54 -- nvmf/common.sh@123 -- # set -e 00:12:24.116 10:04:54 -- nvmf/common.sh@124 -- # return 0 00:12:24.116 10:04:54 -- nvmf/common.sh@477 -- # '[' -n 77447 ']' 00:12:24.116 10:04:54 -- nvmf/common.sh@478 -- # killprocess 77447 00:12:24.116 10:04:54 -- common/autotest_common.sh@926 -- # '[' -z 77447 ']' 00:12:24.116 10:04:54 -- common/autotest_common.sh@930 -- # kill -0 77447 00:12:24.116 10:04:54 -- common/autotest_common.sh@931 -- # uname 00:12:24.116 10:04:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:24.116 10:04:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77447 00:12:24.116 10:04:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:24.116 10:04:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:24.116 killing process with pid 77447 00:12:24.116 10:04:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77447' 00:12:24.116 10:04:54 -- common/autotest_common.sh@945 -- # kill 77447 00:12:24.116 10:04:54 -- common/autotest_common.sh@950 -- # wait 77447 00:12:24.375 10:04:54 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:24.375 10:04:54 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:24.375 10:04:54 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:24.375 10:04:54 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:24.375 10:04:54 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:24.375 10:04:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.375 10:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.375 10:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.375 10:04:54 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:24.375 00:12:24.375 real 0m2.631s 00:12:24.375 user 0m8.592s 00:12:24.375 sys 0m0.674s 00:12:24.375 10:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.375 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:24.375 ************************************ 00:12:24.375 END TEST nvmf_multitarget 00:12:24.375 ************************************ 00:12:24.375 10:04:54 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:24.375 10:04:54 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:24.375 10:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:24.375 10:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:24.375 ************************************ 00:12:24.375 START TEST nvmf_rpc 00:12:24.375 ************************************ 00:12:24.375 10:04:54 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:24.634 * Looking for test storage... 
00:12:24.634 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:24.634 10:04:54 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:24.634 10:04:54 -- nvmf/common.sh@7 -- # uname -s 00:12:24.634 10:04:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.634 10:04:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.634 10:04:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.634 10:04:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.634 10:04:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.634 10:04:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.634 10:04:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.634 10:04:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.634 10:04:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.634 10:04:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.634 10:04:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:24.634 10:04:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:24.634 10:04:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.634 10:04:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.634 10:04:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:24.634 10:04:54 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:24.634 10:04:54 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.634 10:04:54 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.634 10:04:54 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.635 10:04:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.635 10:04:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.635 10:04:54 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.635 10:04:54 -- paths/export.sh@5 
-- # export PATH 00:12:24.635 10:04:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.635 10:04:54 -- nvmf/common.sh@46 -- # : 0 00:12:24.635 10:04:54 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:24.635 10:04:54 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:24.635 10:04:54 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:24.635 10:04:54 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.635 10:04:54 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.635 10:04:54 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:24.635 10:04:54 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:24.635 10:04:54 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:24.635 10:04:54 -- target/rpc.sh@11 -- # loops=5 00:12:24.635 10:04:54 -- target/rpc.sh@23 -- # nvmftestinit 00:12:24.635 10:04:54 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:24.635 10:04:54 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.635 10:04:54 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:24.635 10:04:54 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:24.635 10:04:54 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:24.635 10:04:54 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:24.635 10:04:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.635 10:04:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.635 10:04:54 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:24.635 10:04:54 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:24.635 10:04:54 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:24.635 10:04:54 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:24.635 10:04:54 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:24.635 10:04:54 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:24.635 10:04:54 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:24.635 10:04:54 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:24.635 10:04:54 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:24.635 10:04:54 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:24.635 10:04:54 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:24.635 10:04:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:24.635 10:04:54 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:24.635 10:04:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:24.635 10:04:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:24.635 10:04:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:24.635 10:04:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:24.635 10:04:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:24.635 10:04:54 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:24.635 10:04:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:24.635 Cannot find device 
"nvmf_tgt_br" 00:12:24.635 10:04:55 -- nvmf/common.sh@154 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:24.635 Cannot find device "nvmf_tgt_br2" 00:12:24.635 10:04:55 -- nvmf/common.sh@155 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:24.635 10:04:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:24.635 Cannot find device "nvmf_tgt_br" 00:12:24.635 10:04:55 -- nvmf/common.sh@157 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:24.635 Cannot find device "nvmf_tgt_br2" 00:12:24.635 10:04:55 -- nvmf/common.sh@158 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:24.635 10:04:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:24.635 10:04:55 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:24.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.635 10:04:55 -- nvmf/common.sh@161 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:24.635 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:24.635 10:04:55 -- nvmf/common.sh@162 -- # true 00:12:24.635 10:04:55 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:24.635 10:04:55 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:24.635 10:04:55 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:24.635 10:04:55 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:24.635 10:04:55 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:24.893 10:04:55 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:24.893 10:04:55 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:24.893 10:04:55 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:24.893 10:04:55 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:24.893 10:04:55 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:24.893 10:04:55 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:24.893 10:04:55 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:24.893 10:04:55 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:24.893 10:04:55 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:24.893 10:04:55 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:24.893 10:04:55 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:24.893 10:04:55 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:24.893 10:04:55 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:24.893 10:04:55 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:24.893 10:04:55 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:24.893 10:04:55 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:24.893 10:04:55 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:24.893 10:04:55 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:24.893 10:04:55 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:24.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:24.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:12:24.893 00:12:24.893 --- 10.0.0.2 ping statistics --- 00:12:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.893 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:12:24.893 10:04:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:24.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:24.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:12:24.893 00:12:24.893 --- 10.0.0.3 ping statistics --- 00:12:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.893 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:24.893 10:04:55 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:24.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:24.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:24.893 00:12:24.893 --- 10.0.0.1 ping statistics --- 00:12:24.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:24.893 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:24.893 10:04:55 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:24.893 10:04:55 -- nvmf/common.sh@421 -- # return 0 00:12:24.893 10:04:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:24.893 10:04:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:24.893 10:04:55 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:24.893 10:04:55 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:24.893 10:04:55 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:24.893 10:04:55 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:24.893 10:04:55 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:24.893 10:04:55 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:24.893 10:04:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:24.893 10:04:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:24.893 10:04:55 -- common/autotest_common.sh@10 -- # set +x 00:12:24.893 10:04:55 -- nvmf/common.sh@469 -- # nvmfpid=77668 00:12:24.893 10:04:55 -- nvmf/common.sh@470 -- # waitforlisten 77668 00:12:24.893 10:04:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:24.893 10:04:55 -- common/autotest_common.sh@819 -- # '[' -z 77668 ']' 00:12:24.893 10:04:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:24.893 10:04:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:24.893 10:04:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:24.893 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:24.893 10:04:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:24.893 10:04:55 -- common/autotest_common.sh@10 -- # set +x 00:12:24.893 [2024-07-14 10:04:55.410782] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:12:24.893 [2024-07-14 10:04:55.411455] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.151 [2024-07-14 10:04:55.551295] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.151 [2024-07-14 10:04:55.628024] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:25.151 [2024-07-14 10:04:55.628179] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.151 [2024-07-14 10:04:55.628192] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.151 [2024-07-14 10:04:55.628200] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:25.151 [2024-07-14 10:04:55.628369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.151 [2024-07-14 10:04:55.628463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.151 [2024-07-14 10:04:55.629087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.151 [2024-07-14 10:04:55.629136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.136 10:04:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:26.136 10:04:56 -- common/autotest_common.sh@852 -- # return 0 00:12:26.136 10:04:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:26.136 10:04:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:26.136 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.136 10:04:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.136 10:04:56 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:26.136 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.136 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.136 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.136 10:04:56 -- target/rpc.sh@26 -- # stats='{ 00:12:26.136 "poll_groups": [ 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_0", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_1", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_2", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_3", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [] 00:12:26.136 } 00:12:26.136 ], 00:12:26.136 "tick_rate": 2200000000 00:12:26.136 }' 00:12:26.136 
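Before any transport exists, the stats dump above shows four empty poll groups (one per core from -m 0xF), each with "transports": [] and zero qpairs. The jcount/jsum helpers that run next assert exactly that by piping the JSON through jq. Equivalent standalone spot checks, written here as illustrative one-liners that assume SPDK's scripts/rpc.py and the default /var/tmp/spdk.sock rather than the test's rpc_cmd wrapper, would be:

# One poll group per reactor core:
scripts/rpc.py nvmf_get_stats | jq '.poll_groups | length'               # -> 4
# No transport attached to any group yet:
scripts/rpc.py nvmf_get_stats | jq '.poll_groups[0].transports[0]'       # -> null
# And no admin queue pairs anywhere:
scripts/rpc.py nvmf_get_stats | jq '[.poll_groups[].admin_qpairs] | add' # -> 0

Once nvmf_create_transport -t tcp succeeds, every poll group gains a {"trtype": "TCP"} entry, which is precisely the delta between this stats dump and the second one taken a few lines below.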
10:04:56 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:26.136 10:04:56 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:26.136 10:04:56 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:26.136 10:04:56 -- target/rpc.sh@15 -- # wc -l 00:12:26.136 10:04:56 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:26.136 10:04:56 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:26.136 10:04:56 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:26.136 10:04:56 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.136 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.136 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.136 [2024-07-14 10:04:56.570006] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.136 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.136 10:04:56 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:26.136 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.136 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.136 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.136 10:04:56 -- target/rpc.sh@33 -- # stats='{ 00:12:26.136 "poll_groups": [ 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_0", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_1", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_2", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 }, 00:12:26.136 { 00:12:26.136 "admin_qpairs": 0, 00:12:26.136 "completed_nvme_io": 0, 00:12:26.136 "current_admin_qpairs": 0, 00:12:26.136 "current_io_qpairs": 0, 00:12:26.136 "io_qpairs": 0, 00:12:26.136 "name": "nvmf_tgt_poll_group_3", 00:12:26.136 "pending_bdev_io": 0, 00:12:26.136 "transports": [ 00:12:26.136 { 00:12:26.136 "trtype": "TCP" 00:12:26.136 } 00:12:26.136 ] 00:12:26.136 } 00:12:26.136 ], 00:12:26.136 "tick_rate": 2200000000 00:12:26.136 }' 00:12:26.136 10:04:56 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:26.136 10:04:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:26.136 10:04:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:26.136 10:04:56 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.136 10:04:56 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:26.136 10:04:56 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:26.136 10:04:56 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:26.136 10:04:56 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:26.136 10:04:56 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.394 10:04:56 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:26.394 10:04:56 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:26.394 10:04:56 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:26.394 10:04:56 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:26.394 10:04:56 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 Malloc1 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 [2024-07-14 10:04:56.774826] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 -a 10.0.0.2 -s 4420 00:12:26.394 10:04:56 -- common/autotest_common.sh@640 -- # local es=0 00:12:26.394 10:04:56 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 -a 10.0.0.2 -s 4420 00:12:26.394 10:04:56 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:26.394 10:04:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:26.394 10:04:56 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:26.394 10:04:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:26.394 10:04:56 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:26.394 10:04:56 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:26.394 10:04:56 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:26.394 10:04:56 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:26.394 10:04:56 -- 
common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 -a 10.0.0.2 -s 4420 00:12:26.394 [2024-07-14 10:04:56.803112] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31' 00:12:26.394 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:26.394 could not add new controller: failed to write to nvme-fabrics device 00:12:26.394 10:04:56 -- common/autotest_common.sh@643 -- # es=1 00:12:26.394 10:04:56 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:26.394 10:04:56 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:26.394 10:04:56 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:26.394 10:04:56 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:26.394 10:04:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:26.394 10:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:26.394 10:04:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:26.394 10:04:56 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:26.652 10:04:56 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:26.652 10:04:56 -- common/autotest_common.sh@1177 -- # local i=0 00:12:26.652 10:04:56 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:26.652 10:04:56 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:26.652 10:04:56 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:28.552 10:04:58 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:28.552 10:04:58 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:28.552 10:04:58 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:28.552 10:04:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:28.552 10:04:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:28.552 10:04:59 -- common/autotest_common.sh@1187 -- # return 0 00:12:28.552 10:04:59 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.552 10:04:59 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:28.552 10:04:59 -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.552 10:04:59 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:28.552 10:04:59 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.552 10:04:59 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:28.552 10:04:59 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:28.552 10:04:59 -- common/autotest_common.sh@1210 -- # return 0 00:12:28.552 10:04:59 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:28.552 10:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.552 10:04:59 -- common/autotest_common.sh@10 
-- # set +x 00:12:28.552 10:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.552 10:04:59 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.552 10:04:59 -- common/autotest_common.sh@640 -- # local es=0 00:12:28.552 10:04:59 -- common/autotest_common.sh@642 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.552 10:04:59 -- common/autotest_common.sh@628 -- # local arg=nvme 00:12:28.552 10:04:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:28.552 10:04:59 -- common/autotest_common.sh@632 -- # type -t nvme 00:12:28.552 10:04:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:28.552 10:04:59 -- common/autotest_common.sh@634 -- # type -P nvme 00:12:28.552 10:04:59 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:28.552 10:04:59 -- common/autotest_common.sh@634 -- # arg=/usr/sbin/nvme 00:12:28.552 10:04:59 -- common/autotest_common.sh@634 -- # [[ -x /usr/sbin/nvme ]] 00:12:28.553 10:04:59 -- common/autotest_common.sh@643 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.812 [2024-07-14 10:04:59.104780] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31' 00:12:28.812 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:28.812 could not add new controller: failed to write to nvme-fabrics device 00:12:28.812 10:04:59 -- common/autotest_common.sh@643 -- # es=1 00:12:28.812 10:04:59 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:28.812 10:04:59 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:28.812 10:04:59 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:28.812 10:04:59 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:28.812 10:04:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:28.812 10:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:28.812 10:04:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:28.812 10:04:59 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.812 10:04:59 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.812 10:04:59 -- common/autotest_common.sh@1177 -- # local i=0 00:12:28.812 10:04:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.812 10:04:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:28.812 10:04:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:31.348 10:05:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:31.348 10:05:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:31.348 10:05:01 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:31.348 10:05:01 
-- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:31.348 10:05:01 -- common/autotest_common.sh@1187 -- # return 0 00:12:31.348 10:05:01 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.348 10:05:01 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@1198 -- # local i=0 00:12:31.348 10:05:01 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:31.348 10:05:01 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:31.348 10:05:01 -- common/autotest_common.sh@1210 -- # return 0 00:12:31.348 10:05:01 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:31.348 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.348 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.348 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.348 10:05:01 -- target/rpc.sh@81 -- # seq 1 5 00:12:31.348 10:05:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:31.348 10:05:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.348 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.348 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.348 10:05:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:31.348 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.348 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.348 [2024-07-14 10:05:01.397643] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:31.348 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.348 10:05:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:31.348 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.348 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.348 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.348 10:05:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:31.348 10:05:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:31.348 10:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:31.348 10:05:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:31.348 10:05:01 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.348 10:05:01 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.348 10:05:01 -- common/autotest_common.sh@1177 -- # local i=0 00:12:31.348 10:05:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.348 10:05:01 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:31.348 10:05:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:33.250 10:05:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 
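For reference, the waitforserial helper expanded in the trace above polls until the connected NVMe block device becomes visible. A minimal sketch of what it presumably looks like, reconstructed from the traced commands (the real definition lives in autotest_common.sh and may take extra arguments, such as an expected device count):

    waitforserial() {
        local serial=$1
        local i=0
        local nvme_device_counter=1 nvme_devices=0
        # Poll for up to 15 iterations, sleeping 2s between attempts, until the
        # expected number of block devices carrying this serial shows up in lsblk.
        while ((i++ <= 15)); do
            sleep 2
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            ((nvme_devices == nvme_device_counter)) && return 0
        done
        return 1
    }
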
00:12:33.250 10:05:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:33.250 10:05:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.250 10:05:03 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:33.250 10:05:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.250 10:05:03 -- common/autotest_common.sh@1187 -- # return 0 00:12:33.250 10:05:03 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.250 10:05:03 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.250 10:05:03 -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.250 10:05:03 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:33.250 10:05:03 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.250 10:05:03 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:33.250 10:05:03 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.250 10:05:03 -- common/autotest_common.sh@1210 -- # return 0 00:12:33.250 10:05:03 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.250 10:05:03 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 [2024-07-14 10:05:03.708620] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.250 10:05:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:33.250 10:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:33.250 10:05:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:33.250 10:05:03 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 
--hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.508 10:05:03 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.508 10:05:03 -- common/autotest_common.sh@1177 -- # local i=0 00:12:33.508 10:05:03 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.508 10:05:03 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:33.508 10:05:03 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:35.406 10:05:05 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:35.406 10:05:05 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.406 10:05:05 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:35.406 10:05:05 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:35.406 10:05:05 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.406 10:05:05 -- common/autotest_common.sh@1187 -- # return 0 00:12:35.406 10:05:05 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.406 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.664 10:05:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.664 10:05:05 -- common/autotest_common.sh@1198 -- # local i=0 00:12:35.664 10:05:05 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:35.664 10:05:05 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.664 10:05:05 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.664 10:05:05 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:35.664 10:05:05 -- common/autotest_common.sh@1210 -- # return 0 00:12:35.664 10:05:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:35.664 10:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 10:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:35.664 10:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 10:05:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:35.664 10:05:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:35.664 10:05:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 10:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:06 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:35.664 10:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 [2024-07-14 10:05:06.011623] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:35.664 10:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:06 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:35.664 10:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:06 -- common/autotest_common.sh@10 -- # set 
+x 00:12:35.664 10:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:06 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:35.664 10:05:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:35.664 10:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:35.664 10:05:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:35.664 10:05:06 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.664 10:05:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.664 10:05:06 -- common/autotest_common.sh@1177 -- # local i=0 00:12:35.664 10:05:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.664 10:05:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:35.664 10:05:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:38.193 10:05:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:38.193 10:05:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:38.193 10:05:08 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:38.193 10:05:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.193 10:05:08 -- common/autotest_common.sh@1187 -- # return 0 00:12:38.193 10:05:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.193 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.193 10:05:08 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.193 10:05:08 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:38.193 10:05:08 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:38.193 10:05:08 -- common/autotest_common.sh@1210 -- # return 0 00:12:38.193 10:05:08 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.193 10:05:08 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 [2024-07-14 10:05:08.330693] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.193 10:05:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.193 10:05:08 -- common/autotest_common.sh@10 -- # set +x 00:12:38.193 10:05:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.193 10:05:08 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.193 10:05:08 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.193 10:05:08 -- common/autotest_common.sh@1177 -- # local i=0 00:12:38.193 10:05:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.193 10:05:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:38.193 10:05:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:40.096 10:05:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:40.096 10:05:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:40.096 10:05:10 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.096 10:05:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:40.096 10:05:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.096 10:05:10 -- common/autotest_common.sh@1187 -- # return 0 00:12:40.096 10:05:10 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.356 10:05:10 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.356 10:05:10 -- common/autotest_common.sh@1198 -- # local i=0 00:12:40.356 10:05:10 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:40.356 10:05:10 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.356 10:05:10 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:40.356 10:05:10 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.356 10:05:10 -- common/autotest_common.sh@1210 -- # return 0 00:12:40.356 10:05:10 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 
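Each iteration of the loop traced above exercises the full subsystem lifecycle. A sketch of the same flow driven directly through scripts/rpc.py (rpc_cmd effectively wraps that script; error handling and the waitforserial plumbing are elided, and NVME_HOSTNQN/NVME_HOSTID are the values set up in nvmf/common.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    for i in $(seq 1 5); do
        $rpc nvmf_create_subsystem "$nqn" -s SPDKISFASTANDAWESOME   # serial number
        $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
        $rpc nvmf_subsystem_add_ns "$nqn" Malloc1 -n 5              # namespace ID 5
        $rpc nvmf_subsystem_allow_any_host "$nqn"
        nvme connect -t tcp -n "$nqn" -a 10.0.0.2 -s 4420 \
            --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
        waitforserial SPDKISFASTANDAWESOME      # block device is now visible
        nvme disconnect -n "$nqn"
        $rpc nvmf_subsystem_remove_ns "$nqn" 5
        $rpc nvmf_delete_subsystem "$nqn"
    done
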
00:12:40.356 10:05:10 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 [2024-07-14 10:05:10.725707] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.356 10:05:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.356 10:05:10 -- common/autotest_common.sh@10 -- # set +x 00:12:40.356 10:05:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.356 10:05:10 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:40.615 10:05:10 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:40.615 10:05:10 -- common/autotest_common.sh@1177 -- # local i=0 00:12:40.615 10:05:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:40.615 10:05:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:40.615 10:05:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:42.520 10:05:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:42.520 10:05:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:12:42.520 10:05:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:42.520 10:05:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:42.520 10:05:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:42.520 10:05:12 -- common/autotest_common.sh@1187 -- # return 0 00:12:42.520 10:05:12 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:42.520 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:42.520 10:05:13 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:42.520 10:05:13 -- common/autotest_common.sh@1198 -- # local i=0 00:12:42.520 10:05:13 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:12:42.520 10:05:13 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:12:42.854 10:05:13 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@1210 -- # return 0 00:12:42.854 10:05:13 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # 
xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # seq 1 5 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.854 10:05:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 [2024-07-14 10:05:13.112765] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.854 10:05:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 [2024-07-14 10:05:13.160720] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.854 10:05:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 [2024-07-14 10:05:13.208806] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.854 10:05:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 [2024-07-14 10:05:13.256845] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:42.854 10:05:13 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 [2024-07-14 10:05:13.304888] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:42.854 10:05:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:42.854 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:42.854 10:05:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:42.854 10:05:13 -- target/rpc.sh@110 -- # stats='{ 00:12:42.854 "poll_groups": [ 00:12:42.854 { 00:12:42.854 "admin_qpairs": 2, 00:12:42.855 "completed_nvme_io": 66, 00:12:42.855 "current_admin_qpairs": 0, 00:12:42.855 "current_io_qpairs": 0, 00:12:42.855 "io_qpairs": 16, 00:12:42.855 "name": "nvmf_tgt_poll_group_0", 00:12:42.855 "pending_bdev_io": 0, 00:12:42.855 "transports": [ 00:12:42.855 { 00:12:42.855 "trtype": "TCP" 00:12:42.855 } 00:12:42.855 ] 00:12:42.855 }, 00:12:42.855 { 00:12:42.855 "admin_qpairs": 3, 00:12:42.855 "completed_nvme_io": 69, 00:12:42.855 "current_admin_qpairs": 0, 00:12:42.855 "current_io_qpairs": 0, 00:12:42.855 "io_qpairs": 17, 00:12:42.855 "name": "nvmf_tgt_poll_group_1", 00:12:42.855 "pending_bdev_io": 0, 00:12:42.855 "transports": [ 00:12:42.855 { 00:12:42.855 "trtype": "TCP" 00:12:42.855 } 00:12:42.855 ] 00:12:42.855 }, 00:12:42.855 { 00:12:42.855 "admin_qpairs": 1, 00:12:42.855 "completed_nvme_io": 119, 00:12:42.855 "current_admin_qpairs": 0, 00:12:42.855 "current_io_qpairs": 0, 00:12:42.855 "io_qpairs": 19, 00:12:42.855 "name": "nvmf_tgt_poll_group_2", 00:12:42.855 "pending_bdev_io": 0, 00:12:42.855 "transports": [ 00:12:42.855 { 00:12:42.855 "trtype": "TCP" 00:12:42.855 } 00:12:42.855 ] 00:12:42.855 }, 00:12:42.855 { 00:12:42.855 "admin_qpairs": 1, 00:12:42.855 "completed_nvme_io": 166, 00:12:42.855 "current_admin_qpairs": 0, 00:12:42.855 "current_io_qpairs": 0, 00:12:42.855 "io_qpairs": 18, 00:12:42.855 "name": "nvmf_tgt_poll_group_3", 00:12:42.855 "pending_bdev_io": 0, 00:12:42.855 "transports": [ 00:12:42.855 { 00:12:42.855 "trtype": "TCP" 00:12:42.855 } 00:12:42.855 ] 00:12:42.855 } 00:12:42.855 ], 00:12:42.855 "tick_rate": 2200000000 00:12:42.855 }' 00:12:42.855 10:05:13 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:42.855 10:05:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:42.855 10:05:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:42.855 10:05:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.113 10:05:13 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:43.113 10:05:13 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:43.113 10:05:13 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:43.113 10:05:13 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 
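The jcount and jsum helpers expanded at target/rpc.sh@14-20 in this trace are small jq pipelines over the captured $stats JSON. Reconstructed from the traced commands:

    jcount() {
        local filter=$1
        # count how many JSON values match the filter,
        # e.g. one name per poll group
        jq "$filter" | wc -l
    }

    jsum() {
        local filter=$1
        # sum the numeric values selected by the filter across all poll groups
        jq "$filter" | awk '{s+=$1} END {print s}'
    }

    # e.g. in the run above:
    #   echo "$stats" | jsum '.poll_groups[].io_qpairs'   # 16+17+19+18 = 70
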
00:12:43.113 10:05:13 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:43.113 10:05:13 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:12:43.113 10:05:13 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:43.113 10:05:13 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:43.113 10:05:13 -- target/rpc.sh@123 -- # nvmftestfini 00:12:43.113 10:05:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:43.113 10:05:13 -- nvmf/common.sh@116 -- # sync 00:12:43.113 10:05:13 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:43.113 10:05:13 -- nvmf/common.sh@119 -- # set +e 00:12:43.113 10:05:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:43.113 10:05:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:43.113 rmmod nvme_tcp 00:12:43.113 rmmod nvme_fabrics 00:12:43.113 rmmod nvme_keyring 00:12:43.113 10:05:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:43.113 10:05:13 -- nvmf/common.sh@123 -- # set -e 00:12:43.113 10:05:13 -- nvmf/common.sh@124 -- # return 0 00:12:43.113 10:05:13 -- nvmf/common.sh@477 -- # '[' -n 77668 ']' 00:12:43.113 10:05:13 -- nvmf/common.sh@478 -- # killprocess 77668 00:12:43.113 10:05:13 -- common/autotest_common.sh@926 -- # '[' -z 77668 ']' 00:12:43.113 10:05:13 -- common/autotest_common.sh@930 -- # kill -0 77668 00:12:43.113 10:05:13 -- common/autotest_common.sh@931 -- # uname 00:12:43.113 10:05:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:43.113 10:05:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77668 00:12:43.113 killing process with pid 77668 00:12:43.113 10:05:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:43.113 10:05:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:43.113 10:05:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77668' 00:12:43.113 10:05:13 -- common/autotest_common.sh@945 -- # kill 77668 00:12:43.113 10:05:13 -- common/autotest_common.sh@950 -- # wait 77668 00:12:43.370 10:05:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:43.370 10:05:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:43.370 10:05:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:43.370 10:05:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.370 10:05:13 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:43.370 10:05:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.370 10:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.370 10:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.370 10:05:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:43.370 00:12:43.370 real 0m18.959s 00:12:43.370 user 1m11.438s 00:12:43.370 sys 0m2.568s 00:12:43.370 ************************************ 00:12:43.370 END TEST nvmf_rpc 00:12:43.370 ************************************ 00:12:43.370 10:05:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:43.370 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:43.370 10:05:13 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.370 10:05:13 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:43.370 10:05:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:43.370 10:05:13 -- common/autotest_common.sh@10 -- # set +x 00:12:43.370 ************************************ 00:12:43.370 START TEST nvmf_invalid 00:12:43.370 ************************************ 00:12:43.370 
10:05:13 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:43.629 * Looking for test storage... 00:12:43.629 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:43.629 10:05:13 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:43.629 10:05:13 -- nvmf/common.sh@7 -- # uname -s 00:12:43.629 10:05:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:43.629 10:05:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:43.629 10:05:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:43.629 10:05:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:43.629 10:05:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:43.629 10:05:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:43.629 10:05:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:43.629 10:05:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:43.629 10:05:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:43.629 10:05:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:43.629 10:05:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:43.629 10:05:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:43.629 10:05:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:43.629 10:05:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:43.629 10:05:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:43.629 10:05:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:43.629 10:05:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:43.629 10:05:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:43.629 10:05:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.629 10:05:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.629 10:05:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.629 10:05:13 -- paths/export.sh@5 -- # export PATH 00:12:43.629 10:05:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:43.629 10:05:13 -- nvmf/common.sh@46 -- # : 0 00:12:43.629 10:05:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:43.629 10:05:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:43.629 10:05:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:43.629 10:05:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:43.629 10:05:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:43.629 10:05:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:43.629 10:05:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:43.629 10:05:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:43.629 10:05:13 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:12:43.629 10:05:13 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:43.629 10:05:13 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:43.629 10:05:13 -- target/invalid.sh@14 -- # target=foobar 00:12:43.629 10:05:13 -- target/invalid.sh@16 -- # RANDOM=0 00:12:43.629 10:05:13 -- target/invalid.sh@34 -- # nvmftestinit 00:12:43.629 10:05:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:43.629 10:05:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:43.629 10:05:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:43.629 10:05:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:43.629 10:05:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:43.629 10:05:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.629 10:05:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:43.629 10:05:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:43.629 10:05:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:43.629 10:05:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:43.629 10:05:13 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:43.629 10:05:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:43.629 10:05:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 
00:12:43.629 10:05:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:43.629 10:05:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:43.629 10:05:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:43.629 10:05:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:43.629 10:05:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:43.629 10:05:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:43.629 10:05:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:43.629 10:05:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:43.629 10:05:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:43.629 10:05:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:43.629 10:05:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:43.629 Cannot find device "nvmf_tgt_br" 00:12:43.629 10:05:14 -- nvmf/common.sh@154 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:43.629 Cannot find device "nvmf_tgt_br2" 00:12:43.629 10:05:14 -- nvmf/common.sh@155 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:43.629 10:05:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:43.629 Cannot find device "nvmf_tgt_br" 00:12:43.629 10:05:14 -- nvmf/common.sh@157 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:43.629 Cannot find device "nvmf_tgt_br2" 00:12:43.629 10:05:14 -- nvmf/common.sh@158 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:43.629 10:05:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:43.629 10:05:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:43.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.629 10:05:14 -- nvmf/common.sh@161 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:43.629 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:43.629 10:05:14 -- nvmf/common.sh@162 -- # true 00:12:43.629 10:05:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:43.629 10:05:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:43.629 10:05:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:43.629 10:05:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:43.629 10:05:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:43.629 10:05:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:43.629 10:05:14 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:43.630 10:05:14 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:43.888 10:05:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:43.888 10:05:14 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:43.888 10:05:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:43.888 10:05:14 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:43.888 10:05:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 
00:12:43.888 10:05:14 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:43.888 10:05:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:43.888 10:05:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:43.888 10:05:14 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:43.888 10:05:14 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:43.888 10:05:14 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:43.888 10:05:14 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:43.888 10:05:14 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:43.888 10:05:14 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:43.888 10:05:14 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:43.888 10:05:14 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:43.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:43.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:12:43.888 00:12:43.888 --- 10.0.0.2 ping statistics --- 00:12:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.888 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:43.888 10:05:14 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:43.888 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:43.888 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:12:43.888 00:12:43.888 --- 10.0.0.3 ping statistics --- 00:12:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.888 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:12:43.888 10:05:14 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:43.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:43.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:43.888 00:12:43.888 --- 10.0.0.1 ping statistics --- 00:12:43.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:43.888 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:43.888 10:05:14 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:43.888 10:05:14 -- nvmf/common.sh@421 -- # return 0 00:12:43.888 10:05:14 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:43.888 10:05:14 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:43.888 10:05:14 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:43.888 10:05:14 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:43.888 10:05:14 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:43.888 10:05:14 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:43.888 10:05:14 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:43.888 10:05:14 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:43.888 10:05:14 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:43.888 10:05:14 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:43.888 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 10:05:14 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:43.888 10:05:14 -- nvmf/common.sh@469 -- # nvmfpid=78178 00:12:43.888 10:05:14 -- nvmf/common.sh@470 -- # waitforlisten 78178 00:12:43.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
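For orientation, the nvmf_veth_init sequence traced above builds the following topology before the target starts; this is only a summary of the commands already shown, with names and addresses taken from the trace:

    # veth pairs (device with address <-> peer enslaved to bridge nvmf_br):
    #   nvmf_init_if (host,  10.0.0.1/24) <-> nvmf_init_br
    #   nvmf_tgt_if  (netns, 10.0.0.2/24) <-> nvmf_tgt_br
    #   nvmf_tgt_if2 (netns, 10.0.0.3/24) <-> nvmf_tgt_br2
    # iptables then admits TCP/4420 on nvmf_init_if and allows forwarding
    # within nvmf_br, after which the target is launched inside the namespace:
    #   ip netns exec nvmf_tgt_ns_spdk \
    #       /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
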
00:12:43.888 10:05:14 -- common/autotest_common.sh@819 -- # '[' -z 78178 ']' 00:12:43.888 10:05:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:43.888 10:05:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:43.888 10:05:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:43.888 10:05:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:43.888 10:05:14 -- common/autotest_common.sh@10 -- # set +x 00:12:43.888 [2024-07-14 10:05:14.341595] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:43.888 [2024-07-14 10:05:14.341844] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:44.147 [2024-07-14 10:05:14.474029] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:44.147 [2024-07-14 10:05:14.532364] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:44.147 [2024-07-14 10:05:14.532808] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:44.147 [2024-07-14 10:05:14.532861] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:44.147 [2024-07-14 10:05:14.532874] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:44.147 [2024-07-14 10:05:14.533014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:44.147 [2024-07-14 10:05:14.533226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:44.147 [2024-07-14 10:05:14.533673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:44.147 [2024-07-14 10:05:14.533718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.081 10:05:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:45.081 10:05:15 -- common/autotest_common.sh@852 -- # return 0 00:12:45.081 10:05:15 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:45.081 10:05:15 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:45.081 10:05:15 -- common/autotest_common.sh@10 -- # set +x 00:12:45.081 10:05:15 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:45.081 10:05:15 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:45.081 10:05:15 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode12138 00:12:45.081 [2024-07-14 10:05:15.614328] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:45.339 10:05:15 -- target/invalid.sh@40 -- # out='2024/07/14 10:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12138 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:45.339 request: 00:12:45.339 { 00:12:45.339 "method": "nvmf_create_subsystem", 00:12:45.339 "params": { 00:12:45.339 "nqn": "nqn.2016-06.io.spdk:cnode12138", 00:12:45.339 "tgt_name": "foobar" 00:12:45.339 } 00:12:45.339 } 00:12:45.339 Got JSON-RPC error response 00:12:45.339 GoRPCClient: error on JSON-RPC call' 00:12:45.339 
10:05:15 -- target/invalid.sh@41 -- # [[ 2024/07/14 10:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode12138 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:12:45.339 request: 00:12:45.339 { 00:12:45.339 "method": "nvmf_create_subsystem", 00:12:45.339 "params": { 00:12:45.339 "nqn": "nqn.2016-06.io.spdk:cnode12138", 00:12:45.339 "tgt_name": "foobar" 00:12:45.339 } 00:12:45.339 } 00:12:45.339 Got JSON-RPC error response 00:12:45.339 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:45.339 10:05:15 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:45.339 10:05:15 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29005 00:12:45.339 [2024-07-14 10:05:15.882644] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29005: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:45.596 10:05:15 -- target/invalid.sh@45 -- # out='2024/07/14 10:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29005 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:45.596 request: 00:12:45.596 { 00:12:45.596 "method": "nvmf_create_subsystem", 00:12:45.596 "params": { 00:12:45.596 "nqn": "nqn.2016-06.io.spdk:cnode29005", 00:12:45.596 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:45.596 } 00:12:45.596 } 00:12:45.596 Got JSON-RPC error response 00:12:45.596 GoRPCClient: error on JSON-RPC call' 00:12:45.596 10:05:15 -- target/invalid.sh@46 -- # [[ 2024/07/14 10:05:15 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode29005 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:12:45.596 request: 00:12:45.596 { 00:12:45.596 "method": "nvmf_create_subsystem", 00:12:45.596 "params": { 00:12:45.596 "nqn": "nqn.2016-06.io.spdk:cnode29005", 00:12:45.596 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:12:45.596 } 00:12:45.596 } 00:12:45.596 Got JSON-RPC error response 00:12:45.596 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:45.596 10:05:15 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:45.596 10:05:15 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27302 00:12:45.854 [2024-07-14 10:05:16.146885] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27302: invalid model number 'SPDK_Controller' 00:12:45.855 10:05:16 -- target/invalid.sh@50 -- # out='2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27302], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:45.855 request: 00:12:45.855 { 00:12:45.855 "method": "nvmf_create_subsystem", 00:12:45.855 "params": { 00:12:45.855 "nqn": "nqn.2016-06.io.spdk:cnode27302", 00:12:45.855 "model_number": "SPDK_Controller\u001f" 00:12:45.855 } 00:12:45.855 } 00:12:45.855 Got JSON-RPC error response 00:12:45.855 GoRPCClient: error on JSON-RPC call' 00:12:45.855 10:05:16 -- target/invalid.sh@51 -- # 
[[ 2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode27302], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:12:45.855 request: 00:12:45.855 { 00:12:45.855 "method": "nvmf_create_subsystem", 00:12:45.855 "params": { 00:12:45.855 "nqn": "nqn.2016-06.io.spdk:cnode27302", 00:12:45.855 "model_number": "SPDK_Controller\u001f" 00:12:45.855 } 00:12:45.855 } 00:12:45.855 Got JSON-RPC error response 00:12:45.855 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:45.855 10:05:16 -- target/invalid.sh@54 -- # gen_random_s 21 00:12:45.855 10:05:16 -- target/invalid.sh@19 -- # local length=21 ll 00:12:45.855 10:05:16 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:45.855 10:05:16 -- target/invalid.sh@21 -- # local chars 00:12:45.855 10:05:16 -- target/invalid.sh@22 -- # local string 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 37 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=% 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 75 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=K 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 124 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+='|' 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 103 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=g 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 124 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x7c' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+='|' 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 111 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=o 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- 
target/invalid.sh@25 -- # printf %x 93 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=']' 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 125 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+='}' 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 50 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=2 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 66 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=B 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 108 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6c' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=l 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 112 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=p 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 63 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+='?' 
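The wall of printf/echo/string+= records above is gen_random_s at work: invalid.sh assembles a 21-character serial number by drawing one character at a time from a table of ASCII codes 32 through 127, which is why shell metacharacters such as '|' and '?' show up as quoted appends in the trace. A condensed sketch of the helper; the RANDOM-based index is an assumption, while the per-character printf %x / echo -e mechanism is exactly what the trace shows:

  # Sketch of gen_random_s: build a random string of `length` printable
  # characters. The real helper lives in target/invalid.sh.
  gen_random_s() {
      local length=$1 ll string=
      local chars=($(seq 32 127))   # decimal codes of the character table
      for (( ll = 0; ll < length; ll++ )); do
          # printf %x converts the decimal code to hex; echo -e '\xNN'
          # turns that into the actual character (assumed random pick).
          string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"
  }

  gen_random_s 21    # e.g. '%K|g|o]}2Blp?@fRLZY}%' as seen in the trace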
00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 64 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=@ 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 102 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x66' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=f 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 82 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=R 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 76 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=L 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 90 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=Z 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 89 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=Y 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 125 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+='}' 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # printf %x 37 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:45.855 10:05:16 -- target/invalid.sh@25 -- # string+=% 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:45.855 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:45.855 10:05:16 -- target/invalid.sh@28 -- # [[ % == \- ]] 00:12:45.855 10:05:16 -- target/invalid.sh@31 -- # echo '%K|g|o]}2Blp?@fRLZY}%' 00:12:45.855 10:05:16 -- target/invalid.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s '%K|g|o]}2Blp?@fRLZY}%' nqn.2016-06.io.spdk:cnode14727 00:12:46.114 [2024-07-14 10:05:16.451092] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14727: invalid serial number '%K|g|o]}2Blp?@fRLZY}%' 00:12:46.114 10:05:16 -- target/invalid.sh@54 -- # out='2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14727 serial_number:%K|g|o]}2Blp?@fRLZY}%], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN %K|g|o]}2Blp?@fRLZY}% 00:12:46.114 
request: 00:12:46.114 { 00:12:46.114 "method": "nvmf_create_subsystem", 00:12:46.114 "params": { 00:12:46.114 "nqn": "nqn.2016-06.io.spdk:cnode14727", 00:12:46.114 "serial_number": "%K|g|o]}2Blp?@fRLZY}%" 00:12:46.114 } 00:12:46.114 } 00:12:46.114 Got JSON-RPC error response 00:12:46.114 GoRPCClient: error on JSON-RPC call' 00:12:46.114 10:05:16 -- target/invalid.sh@55 -- # [[ 2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode14727 serial_number:%K|g|o]}2Blp?@fRLZY}%], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN %K|g|o]}2Blp?@fRLZY}% 00:12:46.114 request: 00:12:46.114 { 00:12:46.114 "method": "nvmf_create_subsystem", 00:12:46.114 "params": { 00:12:46.114 "nqn": "nqn.2016-06.io.spdk:cnode14727", 00:12:46.114 "serial_number": "%K|g|o]}2Blp?@fRLZY}%" 00:12:46.114 } 00:12:46.114 } 00:12:46.114 Got JSON-RPC error response 00:12:46.114 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:46.114 10:05:16 -- target/invalid.sh@58 -- # gen_random_s 41 00:12:46.114 10:05:16 -- target/invalid.sh@19 -- # local length=41 ll 00:12:46.114 10:05:16 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:46.114 10:05:16 -- target/invalid.sh@21 -- # local chars 00:12:46.114 10:05:16 -- target/invalid.sh@22 -- # local string 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 57 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=9 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 111 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=o 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 121 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=y 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 111 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=o 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 80 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=P 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 113 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=q 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 81 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x51' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=Q 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 116 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=t 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 99 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=c 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 109 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=m 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 99 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=c 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # printf %x 59 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:46.114 10:05:16 -- target/invalid.sh@25 -- # string+=';' 00:12:46.114 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 84 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=T 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 101 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=e 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 50 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=2 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 42 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+='*' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 79 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=O 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 66 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=B 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 97 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=a 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 40 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x28' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+='(' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 50 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x32' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=2 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 92 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+='\' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 109 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=m 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 90 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=Z 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 51 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x33' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=3 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 45 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=- 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 114 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=r 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- 
target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 99 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=c 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 86 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=V 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 60 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+='<' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 61 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x3d' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+== 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 43 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=+ 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 67 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=C 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 53 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=5 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 69 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x45' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=E 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 46 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=. 
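The same loop is now assembling a 41-character string for the model-number check. Every negative test in invalid.sh follows one pattern: call rpc.py with bad input, capture the error text, and glob-match the expected message (the xtrace renders the glob as the escaped `*\I\n\v\a\l\i\d\ \M\N*` form). A hedged sketch of that pattern; the `2>&1` capture and `|| true` plumbing are assumptions, while the rpc.py path, the cnode22699 NQN, and the 'Invalid MN' message come from the log:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # Expected to fail: a model number containing non-printable characters.
  out=$("$rpc" nvmf_create_subsystem -d "$(gen_random_s 41)" \
          nqn.2016-06.io.spdk:cnode22699 2>&1) || true

  # 'Invalid MN' is what nvmf_rpc.c reports for a bad model number.
  [[ $out == *'Invalid MN'* ]] || { echo 'expected Invalid MN error'; exit 1; }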
00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 123 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+='{' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 72 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=H 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 39 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x27' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=\' 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 80 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x50' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=P 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # printf %x 89 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:46.115 10:05:16 -- target/invalid.sh@25 -- # string+=Y 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll++ )) 00:12:46.115 10:05:16 -- target/invalid.sh@24 -- # (( ll < length )) 00:12:46.115 10:05:16 -- target/invalid.sh@28 -- # [[ 9 == \- ]] 00:12:46.115 10:05:16 -- target/invalid.sh@31 -- # echo '9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'\''PY' 00:12:46.115 10:05:16 -- target/invalid.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d '9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'\''PY' nqn.2016-06.io.spdk:cnode22699 00:12:46.374 [2024-07-14 10:05:16.823408] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22699: invalid model number '9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'PY' 00:12:46.374 10:05:16 -- target/invalid.sh@58 -- # out='2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'\''PY nqn:nqn.2016-06.io.spdk:cnode22699], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'\''PY 00:12:46.374 request: 00:12:46.374 { 00:12:46.374 "method": "nvmf_create_subsystem", 00:12:46.374 "params": { 00:12:46.374 "nqn": "nqn.2016-06.io.spdk:cnode22699", 00:12:46.374 "model_number": "9oyoPqQtcmc;Te2*OBa(2\\mZ3-rcV<=+C5E.{H'\''PY" 00:12:46.374 } 00:12:46.374 } 00:12:46.374 Got JSON-RPC error response 00:12:46.374 GoRPCClient: error on JSON-RPC call' 00:12:46.374 10:05:16 -- target/invalid.sh@59 -- # [[ 2024/07/14 10:05:16 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'PY nqn:nqn.2016-06.io.spdk:cnode22699], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN 9oyoPqQtcmc;Te2*OBa(2\mZ3-rcV<=+C5E.{H'PY 00:12:46.374 request: 00:12:46.374 { 00:12:46.374 "method": "nvmf_create_subsystem", 00:12:46.374 "params": { 00:12:46.374 "nqn": 
"nqn.2016-06.io.spdk:cnode22699", 00:12:46.374 "model_number": "9oyoPqQtcmc;Te2*OBa(2\\mZ3-rcV<=+C5E.{H'PY" 00:12:46.374 } 00:12:46.374 } 00:12:46.374 Got JSON-RPC error response 00:12:46.374 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:46.374 10:05:16 -- target/invalid.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:46.632 [2024-07-14 10:05:17.095751] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.632 10:05:17 -- target/invalid.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:46.890 10:05:17 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:46.890 10:05:17 -- target/invalid.sh@67 -- # head -n 1 00:12:46.890 10:05:17 -- target/invalid.sh@67 -- # echo '' 00:12:46.890 10:05:17 -- target/invalid.sh@67 -- # IP= 00:12:46.890 10:05:17 -- target/invalid.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:47.148 [2024-07-14 10:05:17.633534] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:47.148 10:05:17 -- target/invalid.sh@69 -- # out='2024/07/14 10:05:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:47.148 request: 00:12:47.148 { 00:12:47.148 "method": "nvmf_subsystem_remove_listener", 00:12:47.148 "params": { 00:12:47.148 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:47.149 "listen_address": { 00:12:47.149 "trtype": "tcp", 00:12:47.149 "traddr": "", 00:12:47.149 "trsvcid": "4421" 00:12:47.149 } 00:12:47.149 } 00:12:47.149 } 00:12:47.149 Got JSON-RPC error response 00:12:47.149 GoRPCClient: error on JSON-RPC call' 00:12:47.149 10:05:17 -- target/invalid.sh@70 -- # [[ 2024/07/14 10:05:17 error on JSON-RPC call, method: nvmf_subsystem_remove_listener, params: map[listen_address:map[traddr: trsvcid:4421 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode], err: error received for nvmf_subsystem_remove_listener method, err: Code=-32602 Msg=Invalid parameters 00:12:47.149 request: 00:12:47.149 { 00:12:47.149 "method": "nvmf_subsystem_remove_listener", 00:12:47.149 "params": { 00:12:47.149 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:47.149 "listen_address": { 00:12:47.149 "trtype": "tcp", 00:12:47.149 "traddr": "", 00:12:47.149 "trsvcid": "4421" 00:12:47.149 } 00:12:47.149 } 00:12:47.149 } 00:12:47.149 Got JSON-RPC error response 00:12:47.149 GoRPCClient: error on JSON-RPC call != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:47.149 10:05:17 -- target/invalid.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode22816 -i 0 00:12:47.407 [2024-07-14 10:05:17.849734] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22816: invalid cntlid range [0-65519] 00:12:47.407 10:05:17 -- target/invalid.sh@73 -- # out='2024/07/14 10:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:47.407 request: 00:12:47.407 { 00:12:47.407 "method": "nvmf_create_subsystem", 00:12:47.407 "params": { 00:12:47.407 "nqn": 
"nqn.2016-06.io.spdk:cnode22816", 00:12:47.407 "min_cntlid": 0 00:12:47.407 } 00:12:47.407 } 00:12:47.407 Got JSON-RPC error response 00:12:47.407 GoRPCClient: error on JSON-RPC call' 00:12:47.407 10:05:17 -- target/invalid.sh@74 -- # [[ 2024/07/14 10:05:17 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode22816], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [0-65519] 00:12:47.407 request: 00:12:47.407 { 00:12:47.407 "method": "nvmf_create_subsystem", 00:12:47.407 "params": { 00:12:47.407 "nqn": "nqn.2016-06.io.spdk:cnode22816", 00:12:47.407 "min_cntlid": 0 00:12:47.407 } 00:12:47.407 } 00:12:47.407 Got JSON-RPC error response 00:12:47.407 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.407 10:05:17 -- target/invalid.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode30114 -i 65520 00:12:47.666 [2024-07-14 10:05:18.109913] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30114: invalid cntlid range [65520-65519] 00:12:47.666 10:05:18 -- target/invalid.sh@75 -- # out='2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30114], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:47.666 request: 00:12:47.666 { 00:12:47.666 "method": "nvmf_create_subsystem", 00:12:47.666 "params": { 00:12:47.666 "nqn": "nqn.2016-06.io.spdk:cnode30114", 00:12:47.666 "min_cntlid": 65520 00:12:47.666 } 00:12:47.666 } 00:12:47.666 Got JSON-RPC error response 00:12:47.666 GoRPCClient: error on JSON-RPC call' 00:12:47.666 10:05:18 -- target/invalid.sh@76 -- # [[ 2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[min_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode30114], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [65520-65519] 00:12:47.666 request: 00:12:47.666 { 00:12:47.666 "method": "nvmf_create_subsystem", 00:12:47.666 "params": { 00:12:47.666 "nqn": "nqn.2016-06.io.spdk:cnode30114", 00:12:47.666 "min_cntlid": 65520 00:12:47.666 } 00:12:47.666 } 00:12:47.666 Got JSON-RPC error response 00:12:47.666 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.666 10:05:18 -- target/invalid.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3157 -I 0 00:12:47.925 [2024-07-14 10:05:18.366132] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode3157: invalid cntlid range [1-0] 00:12:47.925 10:05:18 -- target/invalid.sh@77 -- # out='2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 nqn:nqn.2016-06.io.spdk:cnode3157], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:47.925 request: 00:12:47.925 { 00:12:47.925 "method": "nvmf_create_subsystem", 00:12:47.925 "params": { 00:12:47.925 "nqn": "nqn.2016-06.io.spdk:cnode3157", 00:12:47.925 "max_cntlid": 0 00:12:47.925 } 00:12:47.925 } 00:12:47.925 Got JSON-RPC error response 00:12:47.925 GoRPCClient: error on JSON-RPC call' 00:12:47.925 10:05:18 -- target/invalid.sh@78 -- # [[ 2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:0 
nqn:nqn.2016-06.io.spdk:cnode3157], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-0] 00:12:47.925 request: 00:12:47.925 { 00:12:47.925 "method": "nvmf_create_subsystem", 00:12:47.925 "params": { 00:12:47.925 "nqn": "nqn.2016-06.io.spdk:cnode3157", 00:12:47.925 "max_cntlid": 0 00:12:47.925 } 00:12:47.925 } 00:12:47.925 Got JSON-RPC error response 00:12:47.925 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:47.925 10:05:18 -- target/invalid.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7049 -I 65520 00:12:48.183 [2024-07-14 10:05:18.634421] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7049: invalid cntlid range [1-65520] 00:12:48.183 10:05:18 -- target/invalid.sh@79 -- # out='2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:48.183 request: 00:12:48.183 { 00:12:48.183 "method": "nvmf_create_subsystem", 00:12:48.183 "params": { 00:12:48.183 "nqn": "nqn.2016-06.io.spdk:cnode7049", 00:12:48.183 "max_cntlid": 65520 00:12:48.183 } 00:12:48.183 } 00:12:48.183 Got JSON-RPC error response 00:12:48.183 GoRPCClient: error on JSON-RPC call' 00:12:48.183 10:05:18 -- target/invalid.sh@80 -- # [[ 2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:65520 nqn:nqn.2016-06.io.spdk:cnode7049], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [1-65520] 00:12:48.183 request: 00:12:48.183 { 00:12:48.183 "method": "nvmf_create_subsystem", 00:12:48.183 "params": { 00:12:48.183 "nqn": "nqn.2016-06.io.spdk:cnode7049", 00:12:48.183 "max_cntlid": 65520 00:12:48.183 } 00:12:48.183 } 00:12:48.183 Got JSON-RPC error response 00:12:48.183 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.183 10:05:18 -- target/invalid.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode18573 -i 6 -I 5 00:12:48.442 [2024-07-14 10:05:18.846621] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18573: invalid cntlid range [6-5] 00:12:48.442 10:05:18 -- target/invalid.sh@83 -- # out='2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:48.442 request: 00:12:48.442 { 00:12:48.442 "method": "nvmf_create_subsystem", 00:12:48.442 "params": { 00:12:48.442 "nqn": "nqn.2016-06.io.spdk:cnode18573", 00:12:48.442 "min_cntlid": 6, 00:12:48.442 "max_cntlid": 5 00:12:48.442 } 00:12:48.442 } 00:12:48.442 Got JSON-RPC error response 00:12:48.442 GoRPCClient: error on JSON-RPC call' 00:12:48.442 10:05:18 -- target/invalid.sh@84 -- # [[ 2024/07/14 10:05:18 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[max_cntlid:5 min_cntlid:6 nqn:nqn.2016-06.io.spdk:cnode18573], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid cntlid range [6-5] 00:12:48.442 request: 00:12:48.442 { 00:12:48.442 "method": "nvmf_create_subsystem", 00:12:48.442 "params": { 00:12:48.442 "nqn": "nqn.2016-06.io.spdk:cnode18573", 
00:12:48.442 "min_cntlid": 6, 00:12:48.442 "max_cntlid": 5 00:12:48.442 } 00:12:48.442 } 00:12:48.442 Got JSON-RPC error response 00:12:48.442 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:48.442 10:05:18 -- target/invalid.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:48.442 10:05:18 -- target/invalid.sh@87 -- # out='request: 00:12:48.442 { 00:12:48.442 "name": "foobar", 00:12:48.442 "method": "nvmf_delete_target", 00:12:48.442 "req_id": 1 00:12:48.442 } 00:12:48.442 Got JSON-RPC error response 00:12:48.442 response: 00:12:48.442 { 00:12:48.442 "code": -32602, 00:12:48.442 "message": "The specified target doesn'\''t exist, cannot delete it." 00:12:48.442 }' 00:12:48.442 10:05:18 -- target/invalid.sh@88 -- # [[ request: 00:12:48.442 { 00:12:48.442 "name": "foobar", 00:12:48.442 "method": "nvmf_delete_target", 00:12:48.442 "req_id": 1 00:12:48.442 } 00:12:48.442 Got JSON-RPC error response 00:12:48.442 response: 00:12:48.442 { 00:12:48.442 "code": -32602, 00:12:48.442 "message": "The specified target doesn't exist, cannot delete it." 00:12:48.442 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:48.442 10:05:18 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:12:48.442 10:05:18 -- target/invalid.sh@91 -- # nvmftestfini 00:12:48.442 10:05:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:48.442 10:05:18 -- nvmf/common.sh@116 -- # sync 00:12:48.701 10:05:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:48.701 10:05:19 -- nvmf/common.sh@119 -- # set +e 00:12:48.701 10:05:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:48.701 10:05:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:48.701 rmmod nvme_tcp 00:12:48.701 rmmod nvme_fabrics 00:12:48.701 rmmod nvme_keyring 00:12:48.701 10:05:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:48.701 10:05:19 -- nvmf/common.sh@123 -- # set -e 00:12:48.701 10:05:19 -- nvmf/common.sh@124 -- # return 0 00:12:48.701 10:05:19 -- nvmf/common.sh@477 -- # '[' -n 78178 ']' 00:12:48.701 10:05:19 -- nvmf/common.sh@478 -- # killprocess 78178 00:12:48.701 10:05:19 -- common/autotest_common.sh@926 -- # '[' -z 78178 ']' 00:12:48.701 10:05:19 -- common/autotest_common.sh@930 -- # kill -0 78178 00:12:48.701 10:05:19 -- common/autotest_common.sh@931 -- # uname 00:12:48.701 10:05:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:48.701 10:05:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78178 00:12:48.701 10:05:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:48.701 10:05:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:48.701 killing process with pid 78178 00:12:48.701 10:05:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78178' 00:12:48.701 10:05:19 -- common/autotest_common.sh@945 -- # kill 78178 00:12:48.701 10:05:19 -- common/autotest_common.sh@950 -- # wait 78178 00:12:48.960 10:05:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:48.960 10:05:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:48.960 10:05:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:48.960 10:05:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:48.960 10:05:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:48.960 10:05:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:48.960 10:05:19 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.960 10:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.960 10:05:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:48.960 ************************************ 00:12:48.960 END TEST nvmf_invalid 00:12:48.960 ************************************ 00:12:48.960 00:12:48.960 real 0m5.428s 00:12:48.960 user 0m21.914s 00:12:48.960 sys 0m1.180s 00:12:48.960 10:05:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:48.960 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:48.960 10:05:19 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:48.960 10:05:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:48.960 10:05:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:48.960 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:48.960 ************************************ 00:12:48.960 START TEST nvmf_abort 00:12:48.960 ************************************ 00:12:48.960 10:05:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:12:48.960 * Looking for test storage... 00:12:48.960 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:48.960 10:05:19 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:48.960 10:05:19 -- nvmf/common.sh@7 -- # uname -s 00:12:48.960 10:05:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:48.960 10:05:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:48.960 10:05:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:48.960 10:05:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:48.960 10:05:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:48.960 10:05:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:48.960 10:05:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:48.960 10:05:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:48.960 10:05:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:48.960 10:05:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:48.960 10:05:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:48.960 10:05:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:48.960 10:05:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:48.960 10:05:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:48.960 10:05:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:48.960 10:05:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:48.960 10:05:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:48.960 10:05:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:48.960 10:05:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:48.961 10:05:19 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.961 10:05:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.961 10:05:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.961 10:05:19 -- paths/export.sh@5 -- # export PATH 00:12:48.961 10:05:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:48.961 10:05:19 -- nvmf/common.sh@46 -- # : 0 00:12:48.961 10:05:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:48.961 10:05:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:48.961 10:05:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:48.961 10:05:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:48.961 10:05:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:48.961 10:05:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:48.961 10:05:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:48.961 10:05:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:48.961 10:05:19 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:48.961 10:05:19 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:12:48.961 10:05:19 -- target/abort.sh@14 -- # nvmftestinit 00:12:48.961 10:05:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:48.961 10:05:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:48.961 10:05:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:48.961 10:05:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:48.961 10:05:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:48.961 10:05:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:12:48.961 10:05:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:48.961 10:05:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:48.961 10:05:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:48.961 10:05:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:48.961 10:05:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:48.961 10:05:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:48.961 10:05:19 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:48.961 10:05:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:48.961 10:05:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.961 10:05:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.961 10:05:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:48.961 10:05:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:48.961 10:05:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:48.961 10:05:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:48.961 10:05:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:48.961 10:05:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.961 10:05:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:48.961 10:05:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:48.961 10:05:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:48.961 10:05:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:48.961 10:05:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:48.961 10:05:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:48.961 Cannot find device "nvmf_tgt_br" 00:12:48.961 10:05:19 -- nvmf/common.sh@154 -- # true 00:12:48.961 10:05:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:48.961 Cannot find device "nvmf_tgt_br2" 00:12:48.961 10:05:19 -- nvmf/common.sh@155 -- # true 00:12:48.961 10:05:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:48.961 10:05:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:48.961 Cannot find device "nvmf_tgt_br" 00:12:48.961 10:05:19 -- nvmf/common.sh@157 -- # true 00:12:48.961 10:05:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:48.961 Cannot find device "nvmf_tgt_br2" 00:12:48.961 10:05:19 -- nvmf/common.sh@158 -- # true 00:12:48.961 10:05:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:49.220 10:05:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:49.220 10:05:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:49.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.220 10:05:19 -- nvmf/common.sh@161 -- # true 00:12:49.220 10:05:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:49.220 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:49.220 10:05:19 -- nvmf/common.sh@162 -- # true 00:12:49.220 10:05:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:49.220 10:05:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:49.220 10:05:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:49.220 10:05:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:49.220 
10:05:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:49.220 10:05:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:49.220 10:05:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:49.220 10:05:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:49.220 10:05:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:49.220 10:05:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:49.221 10:05:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:49.221 10:05:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:49.221 10:05:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:49.221 10:05:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:49.221 10:05:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:49.221 10:05:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:49.221 10:05:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:49.221 10:05:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:49.221 10:05:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:49.221 10:05:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:49.221 10:05:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:49.221 10:05:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:49.221 10:05:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:49.221 10:05:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:49.221 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.221 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.048 ms 00:12:49.221 00:12:49.221 --- 10.0.0.2 ping statistics --- 00:12:49.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.221 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:12:49.221 10:05:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:49.221 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:49.221 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:12:49.221 00:12:49.221 --- 10.0.0.3 ping statistics --- 00:12:49.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.221 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:12:49.221 10:05:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:49.221 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.221 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:12:49.221 00:12:49.221 --- 10.0.0.1 ping statistics --- 00:12:49.221 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.221 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:12:49.221 10:05:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.221 10:05:19 -- nvmf/common.sh@421 -- # return 0 00:12:49.221 10:05:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:49.221 10:05:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.221 10:05:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:49.221 10:05:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:49.221 10:05:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.221 10:05:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:49.221 10:05:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:49.221 10:05:19 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:12:49.221 10:05:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:49.221 10:05:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:49.221 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:49.480 10:05:19 -- nvmf/common.sh@469 -- # nvmfpid=78678 00:12:49.480 10:05:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:49.480 10:05:19 -- nvmf/common.sh@470 -- # waitforlisten 78678 00:12:49.480 10:05:19 -- common/autotest_common.sh@819 -- # '[' -z 78678 ']' 00:12:49.480 10:05:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.480 10:05:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:49.481 10:05:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.481 10:05:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:49.481 10:05:19 -- common/autotest_common.sh@10 -- # set +x 00:12:49.481 [2024-07-14 10:05:19.814860] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:49.481 [2024-07-14 10:05:19.814958] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.481 [2024-07-14 10:05:19.945741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:49.481 [2024-07-14 10:05:20.006913] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:49.481 [2024-07-14 10:05:20.007088] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.481 [2024-07-14 10:05:20.007102] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.481 [2024-07-14 10:05:20.007112] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
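The nvmf_veth_init trace above amounts to a small bridged topology: one initiator veth pair left in the default namespace, two target pairs whose device ends move into nvmf_tgt_ns_spdk, and all three bridge-side ends enslaved to nvmf_br. A condensed sketch of those steps, using the same names as the log (assumes root plus iproute2 and iptables; the idempotent cleanup that produces the "Cannot find device" lines above is omitted):

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvmf_veth_init topology traced above.
# Interface names and addresses match the log; cleanup is omitted.
set -e

NS=nvmf_tgt_ns_spdk
ip netns add "$NS"

# One veth pair per endpoint; the *_br ends stay in the default namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Target-side ends move into the namespace; addresses are 10.0.0.0/24.
ip link set nvmf_tgt_if  netns "$NS"
ip link set nvmf_tgt_if2 netns "$NS"
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec "$NS" ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec "$NS" ip link set nvmf_tgt_if up
ip netns exec "$NS" ip link set nvmf_tgt_if2 up
ip netns exec "$NS" ip link set lo up

# Bridge the three default-namespace ends together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done

# Admit NVMe/TCP (port 4420) and allow bridge-local forwarding.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Same smoke test as the harness: each side pings the other.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec "$NS" ping -c 1 10.0.0.1
```

The bridge is what lets 10.0.0.1 in the default namespace reach 10.0.0.2 and 10.0.0.3 inside the namespace without any routing; the three pings at the end are the same reachability check the log records.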
00:12:49.481 [2024-07-14 10:05:20.007298] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.481 [2024-07-14 10:05:20.007980] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.481 [2024-07-14 10:05:20.007985] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:50.417 10:05:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:50.417 10:05:20 -- common/autotest_common.sh@852 -- # return 0 00:12:50.417 10:05:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:50.417 10:05:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:50.417 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.417 10:05:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:50.417 10:05:20 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:12:50.417 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.417 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.417 [2024-07-14 10:05:20.773727] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:50.417 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.417 10:05:20 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:12:50.417 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.417 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.417 Malloc0 00:12:50.417 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.417 10:05:20 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:50.417 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.417 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.417 Delay0 00:12:50.417 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.417 10:05:20 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:12:50.417 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.417 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.417 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.418 10:05:20 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:12:50.418 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.418 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.418 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.418 10:05:20 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:12:50.418 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.418 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.418 [2024-07-14 10:05:20.849632] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:50.418 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.418 10:05:20 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:50.418 10:05:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:50.418 10:05:20 -- common/autotest_common.sh@10 -- # set +x 00:12:50.418 10:05:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:50.418 10:05:20 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:12:50.675 [2024-07-14 10:05:21.025372] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:52.574 Initializing NVMe Controllers 00:12:52.574 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:12:52.574 controller IO queue size 128 less than required 00:12:52.574 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:12:52.574 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:12:52.574 Initialization complete. Launching workers. 00:12:52.574 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 39725 00:12:52.574 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 39790, failed to submit 62 00:12:52.574 success 39725, unsuccess 65, failed 0 00:12:52.574 10:05:23 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:12:52.574 10:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:52.574 10:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:52.574 10:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:52.574 10:05:23 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:52.574 10:05:23 -- target/abort.sh@38 -- # nvmftestfini 00:12:52.574 10:05:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:52.574 10:05:23 -- nvmf/common.sh@116 -- # sync 00:12:52.575 10:05:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:52.575 10:05:23 -- nvmf/common.sh@119 -- # set +e 00:12:52.575 10:05:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:52.575 10:05:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:52.575 rmmod nvme_tcp 00:12:52.833 rmmod nvme_fabrics 00:12:52.833 rmmod nvme_keyring 00:12:52.833 10:05:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:52.833 10:05:23 -- nvmf/common.sh@123 -- # set -e 00:12:52.833 10:05:23 -- nvmf/common.sh@124 -- # return 0 00:12:52.833 10:05:23 -- nvmf/common.sh@477 -- # '[' -n 78678 ']' 00:12:52.833 10:05:23 -- nvmf/common.sh@478 -- # killprocess 78678 00:12:52.833 10:05:23 -- common/autotest_common.sh@926 -- # '[' -z 78678 ']' 00:12:52.833 10:05:23 -- common/autotest_common.sh@930 -- # kill -0 78678 00:12:52.833 10:05:23 -- common/autotest_common.sh@931 -- # uname 00:12:52.833 10:05:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:52.833 10:05:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78678 00:12:52.833 10:05:23 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:52.833 10:05:23 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:52.833 killing process with pid 78678 00:12:52.833 10:05:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78678' 00:12:52.833 10:05:23 -- common/autotest_common.sh@945 -- # kill 78678 00:12:52.833 10:05:23 -- common/autotest_common.sh@950 -- # wait 78678 00:12:53.091 10:05:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:53.091 10:05:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:53.091 10:05:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:53.091 10:05:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.091 10:05:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:53.091 10:05:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.091 
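Condensed from the abort test trace above, the provisioning plus workload boils down to a handful of RPCs followed by the abort example. The sketch below drives them with rpc.py directly, where the harness itself goes through its rpc_cmd wrapper; every argument is taken from the trace:

```bash
#!/usr/bin/env bash
# Condensed sketch of the nvmf_abort flow traced above.
set -e
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

$RPC nvmf_create_transport -t tcp -o -u 8192 -a 256
$RPC bdev_malloc_create 64 4096 -b Malloc0
# ~1,000,000 us of injected latency per op, so aborts have work to cancel.
$RPC bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Fire abort-heavy I/O at the delayed namespace for one second.
/home/vagrant/spdk_repo/spdk/build/examples/abort \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -c 0x1 -t 1 -l warning -q 128

$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
```

With every namespace op stalled for about a second, the 128 queued commands give the abort tool something to cancel; the summary above shows 39725 of 39790 submitted aborts succeeding.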
10:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.091 10:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.091 10:05:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:53.091 00:12:53.091 real 0m4.081s 00:12:53.091 user 0m11.999s 00:12:53.091 sys 0m0.962s 00:12:53.091 10:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:53.091 ************************************ 00:12:53.091 END TEST nvmf_abort 00:12:53.091 ************************************ 00:12:53.091 10:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:53.091 10:05:23 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:53.091 10:05:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:53.091 10:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:53.091 10:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:53.091 ************************************ 00:12:53.091 START TEST nvmf_ns_hotplug_stress 00:12:53.091 ************************************ 00:12:53.091 10:05:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:12:53.091 * Looking for test storage... 00:12:53.091 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:53.091 10:05:23 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:53.091 10:05:23 -- nvmf/common.sh@7 -- # uname -s 00:12:53.091 10:05:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:53.091 10:05:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:53.091 10:05:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:53.091 10:05:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:53.091 10:05:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:53.091 10:05:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:53.091 10:05:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:53.091 10:05:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:53.091 10:05:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:53.091 10:05:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:53.091 10:05:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:53.091 10:05:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:12:53.091 10:05:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:53.091 10:05:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:53.091 10:05:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:53.091 10:05:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:53.091 10:05:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:53.091 10:05:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:53.092 10:05:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:53.092 10:05:23 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.092 10:05:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.092 10:05:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.092 10:05:23 -- paths/export.sh@5 -- # export PATH 00:12:53.092 10:05:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:53.092 10:05:23 -- nvmf/common.sh@46 -- # : 0 00:12:53.092 10:05:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:53.092 10:05:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:53.092 10:05:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:53.092 10:05:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:53.092 10:05:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:53.092 10:05:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:53.092 10:05:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:53.092 10:05:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:53.092 10:05:23 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:53.092 10:05:23 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:12:53.092 10:05:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:53.092 10:05:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:53.092 10:05:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:53.092 10:05:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:53.092 10:05:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:53.092 10:05:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:12:53.092 10:05:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.092 10:05:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:53.092 10:05:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:53.092 10:05:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:53.092 10:05:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:53.092 10:05:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:53.092 10:05:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:53.092 10:05:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:53.092 10:05:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:53.092 10:05:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:53.092 10:05:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:53.092 10:05:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:53.092 10:05:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:53.092 10:05:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:53.092 10:05:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:53.092 10:05:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:53.092 10:05:23 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:53.092 10:05:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:53.092 10:05:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:53.092 10:05:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:53.092 10:05:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:53.092 10:05:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:53.092 Cannot find device "nvmf_tgt_br" 00:12:53.092 10:05:23 -- nvmf/common.sh@154 -- # true 00:12:53.092 10:05:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:53.092 Cannot find device "nvmf_tgt_br2" 00:12:53.092 10:05:23 -- nvmf/common.sh@155 -- # true 00:12:53.092 10:05:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:53.092 10:05:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:53.349 Cannot find device "nvmf_tgt_br" 00:12:53.349 10:05:23 -- nvmf/common.sh@157 -- # true 00:12:53.349 10:05:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:53.349 Cannot find device "nvmf_tgt_br2" 00:12:53.349 10:05:23 -- nvmf/common.sh@158 -- # true 00:12:53.349 10:05:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:53.349 10:05:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:53.349 10:05:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:53.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.349 10:05:23 -- nvmf/common.sh@161 -- # true 00:12:53.349 10:05:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:53.349 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:53.349 10:05:23 -- nvmf/common.sh@162 -- # true 00:12:53.349 10:05:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:53.349 10:05:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:53.350 10:05:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:53.350 10:05:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:53.350 10:05:23 -- 
nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:53.350 10:05:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:53.350 10:05:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:53.350 10:05:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:53.350 10:05:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:53.350 10:05:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:53.350 10:05:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:53.350 10:05:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:53.350 10:05:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:53.350 10:05:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:53.350 10:05:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:53.350 10:05:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:53.350 10:05:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:53.350 10:05:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:53.350 10:05:23 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:53.350 10:05:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:53.350 10:05:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:53.350 10:05:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:53.350 10:05:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:53.350 10:05:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:53.350 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:53.350 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:12:53.350 00:12:53.350 --- 10.0.0.2 ping statistics --- 00:12:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.350 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:53.350 10:05:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:53.350 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:53.350 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:53.350 00:12:53.350 --- 10.0.0.3 ping statistics --- 00:12:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.350 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:53.350 10:05:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:53.350 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:53.350 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:12:53.350 00:12:53.350 --- 10.0.0.1 ping statistics --- 00:12:53.350 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:53.350 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:12:53.350 10:05:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:53.350 10:05:23 -- nvmf/common.sh@421 -- # return 0 00:12:53.350 10:05:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:53.350 10:05:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:53.350 10:05:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:53.350 10:05:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:53.350 10:05:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:53.350 10:05:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:53.350 10:05:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:53.350 10:05:23 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:12:53.350 10:05:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:53.350 10:05:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:53.350 10:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:53.350 10:05:23 -- nvmf/common.sh@469 -- # nvmfpid=78941 00:12:53.350 10:05:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:53.350 10:05:23 -- nvmf/common.sh@470 -- # waitforlisten 78941 00:12:53.350 10:05:23 -- common/autotest_common.sh@819 -- # '[' -z 78941 ']' 00:12:53.350 10:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:53.350 10:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:53.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:53.350 10:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:53.350 10:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:53.350 10:05:23 -- common/autotest_common.sh@10 -- # set +x 00:12:53.608 [2024-07-14 10:05:23.933676] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:12:53.608 [2024-07-14 10:05:23.933759] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:53.608 [2024-07-14 10:05:24.072355] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:53.608 [2024-07-14 10:05:24.137819] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:53.608 [2024-07-14 10:05:24.137944] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:53.608 [2024-07-14 10:05:24.137955] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:53.608 [2024-07-14 10:05:24.137963] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
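As in the first test, the target runs inside the namespace and the harness blocks on its RPC socket before issuing any configuration. A minimal sketch of that start-and-wait step; the command line and max_retries=100 come from the trace, while the polling loop is an illustrative stand-in for the waitforlisten helper rather than its exact implementation:

```bash
#!/usr/bin/env bash
# Start nvmf_tgt inside the test namespace (command line from the log) and
# wait for its RPC socket. The polling loop approximates waitforlisten.
NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
for _ in $(seq 100); do
    # rpc_get_methods succeeds once the app listens on /var/tmp/spdk.sock.
    if "$RPC" -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break
    fi
    kill -0 "$nvmfpid" || exit 1   # bail out early if the target died
    sleep 0.1
done
```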
00:12:53.608 [2024-07-14 10:05:24.138903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:53.608 [2024-07-14 10:05:24.139052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:53.608 [2024-07-14 10:05:24.139055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.542 10:05:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:54.542 10:05:24 -- common/autotest_common.sh@852 -- # return 0 00:12:54.542 10:05:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:54.542 10:05:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:54.542 10:05:24 -- common/autotest_common.sh@10 -- # set +x 00:12:54.542 10:05:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.542 10:05:24 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:12:54.542 10:05:24 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:54.799 [2024-07-14 10:05:25.189819] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.800 10:05:25 -- target/ns_hotplug_stress.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:55.058 10:05:25 -- target/ns_hotplug_stress.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.058 [2024-07-14 10:05:25.579871] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.058 10:05:25 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:55.317 10:05:25 -- target/ns_hotplug_stress.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:12:55.575 Malloc0 00:12:55.575 10:05:26 -- target/ns_hotplug_stress.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:55.833 Delay0 00:12:55.834 10:05:26 -- target/ns_hotplug_stress.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:56.092 10:05:26 -- target/ns_hotplug_stress.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:12:56.351 NULL1 00:12:56.351 10:05:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:56.351 10:05:26 -- target/ns_hotplug_stress.sh@40 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:12:56.351 10:05:26 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=79071 00:12:56.351 10:05:26 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:12:56.351 10:05:26 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:57.728 Read completed with error (sct=0, sc=11) 00:12:57.728 10:05:28 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:57.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.728 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:12:57.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:12:57.987 10:05:28 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:12:57.987 10:05:28 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:12:58.246 true 00:12:58.246 10:05:28 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:12:58.246 10:05:28 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.183 10:05:29 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.183 10:05:29 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:12:59.184 10:05:29 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:12:59.442 true 00:12:59.442 10:05:29 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:12:59.442 10:05:29 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:59.701 10:05:30 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:12:59.959 10:05:30 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:12:59.959 10:05:30 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:12:59.959 true 00:12:59.959 10:05:30 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:12:59.959 10:05:30 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:00.896 10:05:31 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.155 10:05:31 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:01.155 10:05:31 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:01.414 true 00:13:01.414 10:05:31 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:01.414 10:05:31 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:01.672 10:05:32 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:01.931 10:05:32 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:01.931 10:05:32 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:02.190 true 00:13:02.190 10:05:32 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:02.190 10:05:32 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.126 10:05:33 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
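Each cycle of the stress loop above is the same four steps while spdk_nvme_perf (PID 79071 here) keeps issuing reads: hot-add Delay0, grow NULL1 by one block, confirm the I/O generator survived, then hot-remove the namespace. One iteration, condensed, with $PERF_PID standing in for the hard-coded 79071:

```bash
#!/usr/bin/env bash
# One iteration of the ns_hotplug_stress loop traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
PERF_PID=${PERF_PID:?set to the spdk_nvme_perf pid}
: "${null_size:=1000}"                            # starts at 1000 in the harness

$RPC nvmf_subsystem_add_ns "$NQN" Delay0          # hot-add the delayed namespace
null_size=$((null_size + 1))
$RPC bdev_null_resize NULL1 "$null_size"          # resize NULL1 under active I/O
kill -0 "$PERF_PID"                               # perf must survive the churn
$RPC nvmf_subsystem_remove_ns "$NQN" 1            # hot-remove namespace 1 again
```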
00:13:03.126 10:05:33 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:03.126 10:05:33 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:03.384 true 00:13:03.384 10:05:33 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:03.384 10:05:33 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:03.642 10:05:34 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:03.900 10:05:34 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:03.900 10:05:34 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:03.900 true 00:13:03.900 10:05:34 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:03.900 10:05:34 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.275 10:05:35 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.275 10:05:35 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:05.275 10:05:35 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:05.534 true 00:13:05.534 10:05:35 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:05.534 10:05:35 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:05.792 10:05:36 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:05.792 10:05:36 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:05.792 10:05:36 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:06.051 true 00:13:06.051 10:05:36 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:06.051 10:05:36 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:06.987 10:05:37 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.245 10:05:37 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:07.245 10:05:37 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:07.504 true 00:13:07.504 10:05:37 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:07.504 10:05:37 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:07.504 10:05:38 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:07.761 10:05:38 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:07.762 10:05:38 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:08.019 true 00:13:08.019 10:05:38 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:08.019 10:05:38 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:13:08.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.954 10:05:39 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:08.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:08.954 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.212 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:09.212 10:05:39 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:09.212 10:05:39 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:09.471 true 00:13:09.471 10:05:39 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:09.471 10:05:39 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.406 10:05:40 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:10.406 10:05:40 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:10.406 10:05:40 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:10.664 true 00:13:10.664 10:05:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:10.664 10:05:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:10.920 10:05:41 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:11.178 10:05:41 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:11.178 10:05:41 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:11.178 true 00:13:11.178 10:05:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:11.178 10:05:41 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.160 10:05:42 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:12.160 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:12.429 10:05:42 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:12.429 10:05:42 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:12.688 true 00:13:12.688 10:05:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:12.688 10:05:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:12.947 10:05:43 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:13.207 10:05:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:13.207 10:05:43 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:13.466 true 00:13:13.466 10:05:43 -- target/ns_hotplug_stress.sh@44 -- # 
kill -0 79071 00:13:13.466 10:05:43 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.402 10:05:44 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:14.402 10:05:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:14.402 10:05:44 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:14.660 true 00:13:14.660 10:05:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:14.660 10:05:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:14.918 10:05:45 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.175 10:05:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:15.175 10:05:45 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:15.434 true 00:13:15.434 10:05:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:15.434 10:05:45 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.369 10:05:46 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.629 10:05:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:16.629 10:05:46 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:16.888 true 00:13:16.888 10:05:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:16.888 10:05:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.888 10:05:47 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.147 10:05:47 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:17.147 10:05:47 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:17.405 true 00:13:17.405 10:05:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:17.405 10:05:47 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.341 10:05:48 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.599 10:05:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:18.599 10:05:48 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:18.599 true 00:13:18.599 10:05:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:18.599 10:05:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.858 10:05:49 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.116 10:05:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:19.116 10:05:49 -- 
target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:19.375 true 00:13:19.375 10:05:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:19.375 10:05:49 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.310 10:05:50 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.569 10:05:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:20.569 10:05:50 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:20.827 true 00:13:20.827 10:05:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:20.827 10:05:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.086 10:05:51 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.086 10:05:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:21.086 10:05:51 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:21.345 true 00:13:21.345 10:05:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:21.345 10:05:51 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:22.281 10:05:52 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.541 10:05:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:22.541 10:05:52 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:22.800 true 00:13:22.800 10:05:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:22.800 10:05:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.057 10:05:53 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.057 10:05:53 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:23.057 10:05:53 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:23.315 true 00:13:23.315 10:05:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:23.315 10:05:53 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.250 10:05:54 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.509 10:05:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:13:24.509 10:05:54 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:24.768 true 00:13:24.768 10:05:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:24.768 10:05:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.027 10:05:55 -- target/ns_hotplug_stress.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.286 10:05:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:13:25.286 10:05:55 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:25.286 true 00:13:25.286 10:05:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:25.286 10:05:55 -- target/ns_hotplug_stress.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.222 10:05:56 -- target/ns_hotplug_stress.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.481 10:05:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:13:26.481 10:05:57 -- target/ns_hotplug_stress.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:26.740 Initializing NVMe Controllers 00:13:26.740 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.740 Controller IO queue size 128, less than required. 00:13:26.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:26.740 Controller IO queue size 128, less than required. 00:13:26.740 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:26.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.740 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:13:26.740 Initialization complete. Launching workers. 00:13:26.740 ======================================================== 00:13:26.740 Latency(us) 00:13:26.740 Device Information : IOPS MiB/s Average min max 00:13:26.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 565.75 0.28 131289.72 2876.52 1107207.21 00:13:26.740 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 14595.55 7.13 8770.13 1438.03 582259.04 00:13:26.740 ======================================================== 00:13:26.740 Total : 15161.30 7.40 13341.99 1438.03 1107207.21 00:13:26.740 00:13:26.740 true 00:13:26.740 10:05:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 79071 00:13:26.740 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (79071) - No such process 00:13:26.740 10:05:57 -- target/ns_hotplug_stress.sh@53 -- # wait 79071 00:13:26.740 10:05:57 -- target/ns_hotplug_stress.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.999 10:05:57 -- target/ns_hotplug_stress.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:27.258 10:05:57 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:13:27.258 10:05:57 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:13:27.258 10:05:57 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:13:27.258 10:05:57 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.258 10:05:57 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:13:27.517 null0 00:13:27.517 10:05:57 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:27.517 10:05:57 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.517 10:05:57 -- target/ns_hotplug_stress.sh@60 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:13:27.775 null1 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:13:27.775 null2 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:27.775 10:05:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:13:28.052 null3 00:13:28.052 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.052 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.052 10:05:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:13:28.330 null4 00:13:28.330 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.330 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.330 10:05:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:13:28.589 null5 00:13:28.589 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.589 10:05:58 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.589 10:05:58 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:13:28.589 null6 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:13:28.848 null7 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
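[Editor's note: each worker runs the add_remove helper, which repeatedly attaches its null bdev to the subsystem as a namespace and detaches it again, exercising namespace hotplug under load. Based on the @14-@18 trace lines above, the helper looks roughly like this (a sketch; the 10-iteration bound comes from the "(( i < 10 ))" checks in the trace):

    # add_remove <nsid> <bdev>: hot-add and hot-remove one namespace in a loop.
    add_remove() {
        local nsid=$1 bdev=$2
        local i
        for (( i = 0; i < 10; i++ )); do
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
            "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
        done
    }
]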
00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@66 -- # wait 80092 80093 80096 80098 80099 80101 80104 80106 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:28.848 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:29.107 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.107 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:29.107 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:29.107 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.107 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.365 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.366 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:29.366 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.366 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.366 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.624 10:05:59 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.624 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 
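[Editor's note: the heavily interleaved add/remove traffic in this stretch of the log comes from the eight helpers running concurrently as background jobs; the "wait 80092 80093 80096 ..." entry above is the parent shell joining those worker PIDs. A sketch of the spawn-and-join pattern implied by the @62-@66 trace lines (PIDs shown are the ones from this particular run):

    for (( i = 0; i < nthreads; i++ )); do
        add_remove $((i + 1)) "null$i" &   # one worker per namespace/bdev pair
        pids+=($!)
    done
    wait "${pids[@]}"                      # e.g. "wait 80092 80093 ..." in the log above
]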
00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:29.882 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:30.140 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.398 10:06:00 -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.398 10:06:00 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:30.656 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:30.914 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.173 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.430 10:06:01 
-- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.430 10:06:01 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.431 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:31.688 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.688 10:06:01 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.688 10:06:01 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:31.688 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.688 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:31.689 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:31.947 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.205 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.463 10:06:02 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:32.722 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.981 10:06:03 
-- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:32.981 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 7 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.239 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.497 10:06:03 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:33.497 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.497 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.497 10:06:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:33.756 
10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:33.756 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.015 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:34.274 10:06:04 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:34.274 10:06:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:34.274 10:06:04 -- nvmf/common.sh@116 -- # sync 00:13:34.274 10:06:04 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:34.274 10:06:04 -- nvmf/common.sh@119 -- # set +e 00:13:34.274 10:06:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:34.274 10:06:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:34.274 rmmod nvme_tcp 00:13:34.274 rmmod nvme_fabrics 00:13:34.274 rmmod nvme_keyring 00:13:34.274 10:06:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:34.274 10:06:04 -- nvmf/common.sh@123 -- # set -e 00:13:34.274 10:06:04 -- nvmf/common.sh@124 -- # return 0 00:13:34.274 10:06:04 -- nvmf/common.sh@477 -- # '[' -n 78941 ']' 00:13:34.274 10:06:04 -- nvmf/common.sh@478 -- # killprocess 78941 00:13:34.274 10:06:04 -- common/autotest_common.sh@926 -- # '[' -z 78941 ']' 00:13:34.274 10:06:04 -- common/autotest_common.sh@930 -- # kill -0 78941 00:13:34.274 10:06:04 -- common/autotest_common.sh@931 -- # uname 00:13:34.274 10:06:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:34.274 10:06:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78941 00:13:34.274 killing 
process with pid 78941 00:13:34.274 10:06:04 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:34.274 10:06:04 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:34.274 10:06:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78941' 00:13:34.274 10:06:04 -- common/autotest_common.sh@945 -- # kill 78941 00:13:34.274 10:06:04 -- common/autotest_common.sh@950 -- # wait 78941 00:13:34.533 10:06:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:34.533 10:06:04 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:34.533 10:06:04 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:34.533 10:06:04 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:34.533 10:06:04 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:34.533 10:06:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.533 10:06:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.533 10:06:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.533 10:06:04 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:34.533 00:13:34.533 real 0m41.488s 00:13:34.533 user 3m17.052s 00:13:34.533 sys 0m12.275s 00:13:34.533 10:06:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:34.533 10:06:04 -- common/autotest_common.sh@10 -- # set +x 00:13:34.533 ************************************ 00:13:34.533 END TEST nvmf_ns_hotplug_stress 00:13:34.533 ************************************ 00:13:34.533 10:06:05 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.533 10:06:05 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:34.533 10:06:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:34.533 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:34.533 ************************************ 00:13:34.533 START TEST nvmf_connect_stress 00:13:34.533 ************************************ 00:13:34.533 10:06:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:34.793 * Looking for test storage... 
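[Editor's note: the teardown just above (nvmftestfini) syncs, unloads the NVMe/TCP kernel modules, and stops the nvmf_tgt process, which is what produces the "rmmod nvme_tcp" / "rmmod nvme_fabrics" / "rmmod nvme_keyring" lines and "killing process with pid 78941". A rough shell equivalent of the visible steps, with the killprocess helper reduced to its observable effect (an approximation, not the helper's actual body):

    sync
    modprobe -v -r nvme-tcp        # also pulls out nvme_fabrics / nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 78941 && wait 78941       # killprocess: stop the nvmf_tgt reactor (PID from this run)
]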
00:13:34.793 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:34.793 10:06:05 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:34.793 10:06:05 -- nvmf/common.sh@7 -- # uname -s 00:13:34.793 10:06:05 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:34.793 10:06:05 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:34.793 10:06:05 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:34.793 10:06:05 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:34.793 10:06:05 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:34.793 10:06:05 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:34.793 10:06:05 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:34.793 10:06:05 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:34.793 10:06:05 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:34.793 10:06:05 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:34.793 10:06:05 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:34.793 10:06:05 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:34.793 10:06:05 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:34.793 10:06:05 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:34.793 10:06:05 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:34.793 10:06:05 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:34.793 10:06:05 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:34.793 10:06:05 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:34.793 10:06:05 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:34.794 10:06:05 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.794 10:06:05 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.794 10:06:05 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.794 10:06:05 -- 
paths/export.sh@5 -- # export PATH 00:13:34.794 10:06:05 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:34.794 10:06:05 -- nvmf/common.sh@46 -- # : 0 00:13:34.794 10:06:05 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:34.794 10:06:05 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:34.794 10:06:05 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:34.794 10:06:05 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:34.794 10:06:05 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:34.794 10:06:05 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:34.794 10:06:05 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:34.794 10:06:05 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:34.794 10:06:05 -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:34.794 10:06:05 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:34.794 10:06:05 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:34.794 10:06:05 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:34.794 10:06:05 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:34.794 10:06:05 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:34.794 10:06:05 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:34.794 10:06:05 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:34.794 10:06:05 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:34.794 10:06:05 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:34.794 10:06:05 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:34.794 10:06:05 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:34.794 10:06:05 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:34.794 10:06:05 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:34.794 10:06:05 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:34.794 10:06:05 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:34.794 10:06:05 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:34.794 10:06:05 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:34.794 10:06:05 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:34.794 10:06:05 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:34.794 10:06:05 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:34.794 10:06:05 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:34.794 10:06:05 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:34.794 10:06:05 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:34.794 10:06:05 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:34.794 10:06:05 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:34.794 10:06:05 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:34.794 10:06:05 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:34.794 10:06:05 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:34.794 Cannot find device "nvmf_tgt_br" 00:13:34.794 
10:06:05 -- nvmf/common.sh@154 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:34.794 Cannot find device "nvmf_tgt_br2" 00:13:34.794 10:06:05 -- nvmf/common.sh@155 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:34.794 10:06:05 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:34.794 Cannot find device "nvmf_tgt_br" 00:13:34.794 10:06:05 -- nvmf/common.sh@157 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:34.794 Cannot find device "nvmf_tgt_br2" 00:13:34.794 10:06:05 -- nvmf/common.sh@158 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:34.794 10:06:05 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:34.794 10:06:05 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:34.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.794 10:06:05 -- nvmf/common.sh@161 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:34.794 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:34.794 10:06:05 -- nvmf/common.sh@162 -- # true 00:13:34.794 10:06:05 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:34.794 10:06:05 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:34.794 10:06:05 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:34.794 10:06:05 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:34.794 10:06:05 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:34.794 10:06:05 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:34.794 10:06:05 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:34.794 10:06:05 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:34.794 10:06:05 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:34.794 10:06:05 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:34.794 10:06:05 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:34.794 10:06:05 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:34.794 10:06:05 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:34.794 10:06:05 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:35.054 10:06:05 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:35.054 10:06:05 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:35.054 10:06:05 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:35.054 10:06:05 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:35.054 10:06:05 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:35.054 10:06:05 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:35.054 10:06:05 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:35.054 10:06:05 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:35.054 10:06:05 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:35.054 10:06:05 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:35.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:35.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:13:35.054 00:13:35.054 --- 10.0.0.2 ping statistics --- 00:13:35.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.054 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:13:35.054 10:06:05 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:35.054 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:35.054 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:13:35.054 00:13:35.054 --- 10.0.0.3 ping statistics --- 00:13:35.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.054 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:13:35.054 10:06:05 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:35.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:35.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:35.054 00:13:35.054 --- 10.0.0.1 ping statistics --- 00:13:35.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:35.054 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:35.054 10:06:05 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:35.054 10:06:05 -- nvmf/common.sh@421 -- # return 0 00:13:35.054 10:06:05 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:35.054 10:06:05 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:35.054 10:06:05 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:35.054 10:06:05 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:35.054 10:06:05 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:35.054 10:06:05 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:35.054 10:06:05 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:35.054 10:06:05 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:35.054 10:06:05 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:35.054 10:06:05 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:35.054 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.054 10:06:05 -- nvmf/common.sh@469 -- # nvmfpid=81403 00:13:35.054 10:06:05 -- nvmf/common.sh@470 -- # waitforlisten 81403 00:13:35.054 10:06:05 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:35.054 10:06:05 -- common/autotest_common.sh@819 -- # '[' -z 81403 ']' 00:13:35.054 10:06:05 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:35.054 10:06:05 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:35.054 10:06:05 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:35.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:35.054 10:06:05 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:35.054 10:06:05 -- common/autotest_common.sh@10 -- # set +x 00:13:35.054 [2024-07-14 10:06:05.496966] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
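[Editor's note: before the target starts, nvmf_veth_init builds an isolated test network: a network namespace for the target, veth pairs for the initiator and target sides, a bridge tying them together, and an iptables rule admitting NVMe/TCP traffic on port 4420; the three pings above verify connectivity in both directions. Condensed from the nvmf/common.sh trace above (second target interface and some link-up steps omitted for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # move target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                          # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
    ip link set nvmf_init_if up && ip link set nvmf_init_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                                # initiator -> target
]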
00:13:35.054 [2024-07-14 10:06:05.497020] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:35.314 [2024-07-14 10:06:05.632903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:35.314 [2024-07-14 10:06:05.689319] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:35.314 [2024-07-14 10:06:05.689648] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:35.314 [2024-07-14 10:06:05.689767] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:35.314 [2024-07-14 10:06:05.689839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:35.314 [2024-07-14 10:06:05.690018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:35.314 [2024-07-14 10:06:05.690563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:35.314 [2024-07-14 10:06:05.690574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:36.251 10:06:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:36.251 10:06:06 -- common/autotest_common.sh@852 -- # return 0 00:13:36.251 10:06:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:36.251 10:06:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:36.251 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 10:06:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:36.251 10:06:06 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:36.251 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.251 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 [2024-07-14 10:06:06.540544] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:36.251 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.251 10:06:06 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:36.251 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.251 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.251 10:06:06 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:36.251 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.251 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 [2024-07-14 10:06:06.558375] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:36.251 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.251 10:06:06 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:36.251 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.251 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.251 NULL1 00:13:36.251 10:06:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.251 10:06:06 -- target/connect_stress.sh@21 -- # PERF_PID=81455 00:13:36.251 10:06:06 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 
-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:36.251 10:06:06 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:36.251 10:06:06 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # seq 1 20 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.251 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.251 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:36.252 10:06:06 -- target/connect_stress.sh@28 -- # cat 00:13:36.252 10:06:06 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:36.252 10:06:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.252 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.252 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.511 10:06:06 -- common/autotest_common.sh@579 
-- # [[ 0 == 0 ]] 00:13:36.511 10:06:06 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:36.511 10:06:06 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.511 10:06:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.511 10:06:06 -- common/autotest_common.sh@10 -- # set +x 00:13:36.769 10:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:36.769 10:06:07 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:36.769 10:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:36.769 10:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:36.769 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.337 10:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.337 10:06:07 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:37.337 10:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.337 10:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.337 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.597 10:06:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.597 10:06:07 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:37.597 10:06:07 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.597 10:06:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.597 10:06:07 -- common/autotest_common.sh@10 -- # set +x 00:13:37.856 10:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.856 10:06:08 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:37.856 10:06:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:37.856 10:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.856 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:38.115 10:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.115 10:06:08 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:38.115 10:06:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.115 10:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.115 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:38.375 10:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.375 10:06:08 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:38.375 10:06:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.375 10:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.375 10:06:08 -- common/autotest_common.sh@10 -- # set +x 00:13:38.943 10:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:38.943 10:06:09 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:38.943 10:06:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.943 10:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:38.943 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.201 10:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.201 10:06:09 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:39.201 10:06:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.201 10:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.201 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.459 10:06:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.459 10:06:09 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:39.459 10:06:09 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.459 10:06:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.459 10:06:09 -- common/autotest_common.sh@10 -- # set +x 00:13:39.716 10:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.716 
10:06:10 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:39.716 10:06:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.716 10:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.716 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:13:39.974 10:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:39.974 10:06:10 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:39.974 10:06:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.974 10:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:39.974 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.540 10:06:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.540 10:06:10 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:40.540 10:06:10 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.540 10:06:10 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.540 10:06:10 -- common/autotest_common.sh@10 -- # set +x 00:13:40.799 10:06:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:40.799 10:06:11 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:40.799 10:06:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.799 10:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:40.799 10:06:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.058 10:06:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.058 10:06:11 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:41.058 10:06:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.058 10:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.058 10:06:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.318 10:06:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.318 10:06:11 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:41.318 10:06:11 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.318 10:06:11 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.318 10:06:11 -- common/autotest_common.sh@10 -- # set +x 00:13:41.577 10:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:41.577 10:06:12 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:41.577 10:06:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.577 10:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:41.577 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.146 10:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.146 10:06:12 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:42.146 10:06:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.146 10:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.146 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.405 10:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.405 10:06:12 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:42.405 10:06:12 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.405 10:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.405 10:06:12 -- common/autotest_common.sh@10 -- # set +x 00:13:42.665 10:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.665 10:06:13 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:42.665 10:06:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.665 10:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.665 10:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:42.922 10:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:42.922 10:06:13 -- 
target/connect_stress.sh@34 -- # kill -0 81455 00:13:42.922 10:06:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.922 10:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:42.922 10:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.180 10:06:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.180 10:06:13 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:43.180 10:06:13 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.180 10:06:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.180 10:06:13 -- common/autotest_common.sh@10 -- # set +x 00:13:43.747 10:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:43.747 10:06:14 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:43.747 10:06:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:43.747 10:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:43.747 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:44.006 10:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.006 10:06:14 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:44.006 10:06:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.006 10:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.006 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:44.266 10:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.266 10:06:14 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:44.266 10:06:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.266 10:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.266 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:44.525 10:06:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.525 10:06:14 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:44.525 10:06:14 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.525 10:06:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.525 10:06:14 -- common/autotest_common.sh@10 -- # set +x 00:13:44.784 10:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:44.784 10:06:15 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:44.784 10:06:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:44.784 10:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:44.784 10:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.352 10:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.352 10:06:15 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:45.352 10:06:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.352 10:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.352 10:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.611 10:06:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.611 10:06:15 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:45.611 10:06:15 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.611 10:06:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.611 10:06:15 -- common/autotest_common.sh@10 -- # set +x 00:13:45.870 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:45.870 10:06:16 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:45.870 10:06:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:45.870 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:45.870 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.129 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.129 10:06:16 -- target/connect_stress.sh@34 -- # 
kill -0 81455 00:13:46.129 10:06:16 -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.129 10:06:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:46.129 10:06:16 -- common/autotest_common.sh@10 -- # set +x 00:13:46.388 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:46.388 10:06:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:46.388 10:06:16 -- target/connect_stress.sh@34 -- # kill -0 81455 00:13:46.388 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (81455) - No such process 00:13:46.388 10:06:16 -- target/connect_stress.sh@38 -- # wait 81455 00:13:46.388 10:06:16 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:13:46.388 10:06:16 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:46.388 10:06:16 -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:46.388 10:06:16 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:46.388 10:06:16 -- nvmf/common.sh@116 -- # sync 00:13:46.647 10:06:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:46.647 10:06:16 -- nvmf/common.sh@119 -- # set +e 00:13:46.647 10:06:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:46.647 10:06:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:46.647 rmmod nvme_tcp 00:13:46.647 rmmod nvme_fabrics 00:13:46.647 rmmod nvme_keyring 00:13:46.647 10:06:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:46.647 10:06:17 -- nvmf/common.sh@123 -- # set -e 00:13:46.647 10:06:17 -- nvmf/common.sh@124 -- # return 0 00:13:46.647 10:06:17 -- nvmf/common.sh@477 -- # '[' -n 81403 ']' 00:13:46.647 10:06:17 -- nvmf/common.sh@478 -- # killprocess 81403 00:13:46.647 10:06:17 -- common/autotest_common.sh@926 -- # '[' -z 81403 ']' 00:13:46.647 10:06:17 -- common/autotest_common.sh@930 -- # kill -0 81403 00:13:46.647 10:06:17 -- common/autotest_common.sh@931 -- # uname 00:13:46.647 10:06:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:46.647 10:06:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81403 00:13:46.647 10:06:17 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:46.647 10:06:17 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:46.647 killing process with pid 81403 00:13:46.647 10:06:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81403' 00:13:46.647 10:06:17 -- common/autotest_common.sh@945 -- # kill 81403 00:13:46.647 10:06:17 -- common/autotest_common.sh@950 -- # wait 81403 00:13:46.906 10:06:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:46.907 10:06:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:46.907 10:06:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:46.907 10:06:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.907 10:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.907 10:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.907 10:06:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:46.907 00:13:46.907 real 0m12.233s 00:13:46.907 user 0m41.010s 00:13:46.907 sys 0m3.325s 00:13:46.907 10:06:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:46.907 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:46.907 ************************************ 
00:13:46.907 END TEST nvmf_connect_stress 00:13:46.907 ************************************ 00:13:46.907 10:06:17 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:46.907 10:06:17 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:46.907 10:06:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:46.907 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:46.907 ************************************ 00:13:46.907 START TEST nvmf_fused_ordering 00:13:46.907 ************************************ 00:13:46.907 10:06:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:46.907 * Looking for test storage... 00:13:46.907 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:46.907 10:06:17 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:46.907 10:06:17 -- nvmf/common.sh@7 -- # uname -s 00:13:46.907 10:06:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.907 10:06:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.907 10:06:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.907 10:06:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:46.907 10:06:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:46.907 10:06:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.907 10:06:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:46.907 10:06:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.907 10:06:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:46.907 10:06:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:46.907 10:06:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:46.907 10:06:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.907 10:06:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:46.907 10:06:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:46.907 10:06:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:46.907 10:06:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.907 10:06:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.907 10:06:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.907 10:06:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.907 10:06:17 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.907 10:06:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.907 10:06:17 -- paths/export.sh@5 -- # export PATH 00:13:46.907 10:06:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.907 10:06:17 -- nvmf/common.sh@46 -- # : 0 00:13:46.907 10:06:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:46.907 10:06:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:46.907 10:06:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:46.907 10:06:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.907 10:06:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.907 10:06:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:46.907 10:06:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:46.907 10:06:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:46.907 10:06:17 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:46.907 10:06:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:46.907 10:06:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.907 10:06:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:46.907 10:06:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:46.907 10:06:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:46.907 10:06:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:46.907 10:06:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:46.907 10:06:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:46.907 10:06:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:46.907 10:06:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:46.907 10:06:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:46.907 
10:06:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:46.907 10:06:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:46.907 10:06:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:46.907 10:06:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:46.907 10:06:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:46.907 10:06:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:46.907 10:06:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:46.907 10:06:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:46.907 10:06:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:46.907 10:06:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:46.907 10:06:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:46.907 10:06:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:46.907 10:06:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:46.907 Cannot find device "nvmf_tgt_br" 00:13:46.907 10:06:17 -- nvmf/common.sh@154 -- # true 00:13:46.907 10:06:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:46.907 Cannot find device "nvmf_tgt_br2" 00:13:46.907 10:06:17 -- nvmf/common.sh@155 -- # true 00:13:46.907 10:06:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:47.166 10:06:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:47.166 Cannot find device "nvmf_tgt_br" 00:13:47.166 10:06:17 -- nvmf/common.sh@157 -- # true 00:13:47.166 10:06:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:47.166 Cannot find device "nvmf_tgt_br2" 00:13:47.166 10:06:17 -- nvmf/common.sh@158 -- # true 00:13:47.166 10:06:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:47.166 10:06:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:47.166 10:06:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:47.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.166 10:06:17 -- nvmf/common.sh@161 -- # true 00:13:47.166 10:06:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:47.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:47.166 10:06:17 -- nvmf/common.sh@162 -- # true 00:13:47.166 10:06:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:47.166 10:06:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:47.166 10:06:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:47.166 10:06:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:47.166 10:06:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:47.166 10:06:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:47.166 10:06:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:47.166 10:06:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:47.166 10:06:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:47.166 10:06:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:47.166 10:06:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:47.166 
10:06:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:47.166 10:06:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:47.166 10:06:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:47.166 10:06:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:47.166 10:06:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:47.166 10:06:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:47.166 10:06:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:47.166 10:06:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:47.166 10:06:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:47.166 10:06:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:47.166 10:06:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:47.166 10:06:17 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:47.166 10:06:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:47.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:47.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:13:47.166 00:13:47.166 --- 10.0.0.2 ping statistics --- 00:13:47.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.166 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:13:47.166 10:06:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:47.166 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:47.166 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:13:47.166 00:13:47.166 --- 10.0.0.3 ping statistics --- 00:13:47.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.166 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:13:47.166 10:06:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:47.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:47.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:47.166 00:13:47.166 --- 10.0.0.1 ping statistics --- 00:13:47.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:47.166 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:47.166 10:06:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:47.166 10:06:17 -- nvmf/common.sh@421 -- # return 0 00:13:47.166 10:06:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:47.166 10:06:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:47.166 10:06:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:47.166 10:06:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:47.166 10:06:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:47.166 10:06:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:47.166 10:06:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:47.166 10:06:17 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:47.166 10:06:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:47.166 10:06:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:47.166 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.166 10:06:17 -- nvmf/common.sh@469 -- # nvmfpid=81785 00:13:47.166 10:06:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:47.166 10:06:17 -- nvmf/common.sh@470 -- # waitforlisten 81785 00:13:47.166 10:06:17 -- common/autotest_common.sh@819 -- # '[' -z 81785 ']' 00:13:47.166 10:06:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:47.166 10:06:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:47.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:47.166 10:06:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:47.166 10:06:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:47.166 10:06:17 -- common/autotest_common.sh@10 -- # set +x 00:13:47.430 [2024-07-14 10:06:17.762677] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:13:47.430 [2024-07-14 10:06:17.762762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:47.430 [2024-07-14 10:06:17.902727] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:47.716 [2024-07-14 10:06:17.976030] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:47.716 [2024-07-14 10:06:17.976182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:47.716 [2024-07-14 10:06:17.976225] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:47.716 [2024-07-14 10:06:17.976233] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
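The nvmfappstart/waitforlisten sequence traced here launches nvmf_tgt inside the target namespace and blocks until it answers on its UNIX-domain RPC socket. A rough sketch of the pattern follows; the polling loop is illustrative only, since the real waitforlisten in autotest_common.sh carries retry and error handling beyond this:

    # Launch the target in the namespace (command as traced above) ...
    NVMF_APP=(ip netns exec nvmf_tgt_ns_spdk \
              /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2)
    "${NVMF_APP[@]}" &
    nvmfpid=$!
    # ... then wait for the RPC socket to come up (hypothetical loop):
    for _ in $(seq 1 100); do
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
            rpc_get_methods &>/dev/null && break
        sleep 0.1
    done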
00:13:47.716 [2024-07-14 10:06:17.976305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.295 10:06:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:48.295 10:06:18 -- common/autotest_common.sh@852 -- # return 0 00:13:48.295 10:06:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:48.295 10:06:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 10:06:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.295 10:06:18 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 [2024-07-14 10:06:18.746365] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 [2024-07-14 10:06:18.762504] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 NULL1 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.295 10:06:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:48.295 10:06:18 -- common/autotest_common.sh@10 -- # set +x 00:13:48.295 10:06:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:48.295 10:06:18 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:48.295 [2024-07-14 10:06:18.809057] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
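Once the target is listening, the test provisions it over JSON-RPC. rpc_cmd is the test suite's wrapper around SPDK's RPC interface; the same provisioning could be done directly with scripts/rpc.py, roughly as below (all method names and arguments are taken verbatim from the trace):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
         -a -s SPDK00000000000001 -m 10              # allow any host, max 10 namespaces
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
         -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512             # 1000 MiB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

This is the subsystem (nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420) that the fused_ordering client attaches to in the output that follows.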
00:13:48.295 [2024-07-14 10:06:18.809110] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81835 ] 00:13:48.861 Attached to nqn.2016-06.io.spdk:cnode1 00:13:48.861 Namespace ID: 1 size: 1GB 00:13:48.861 fused_ordering(0) 00:13:48.861 fused_ordering(1) 00:13:48.861 fused_ordering(2) 00:13:48.861 fused_ordering(3) 00:13:48.861 fused_ordering(4) 00:13:48.861 fused_ordering(5) 00:13:48.861 fused_ordering(6) 00:13:48.861 fused_ordering(7) 00:13:48.861 fused_ordering(8) 00:13:48.861 fused_ordering(9) 00:13:48.861 fused_ordering(10) 00:13:48.861 fused_ordering(11) 00:13:48.861 fused_ordering(12) 00:13:48.861 fused_ordering(13) 00:13:48.861 fused_ordering(14) 00:13:48.861 fused_ordering(15) 00:13:48.861 fused_ordering(16) 00:13:48.861 fused_ordering(17) 00:13:48.861 fused_ordering(18) 00:13:48.861 fused_ordering(19) 00:13:48.861 fused_ordering(20) 00:13:48.861 fused_ordering(21) 00:13:48.861 fused_ordering(22) 00:13:48.861 fused_ordering(23) 00:13:48.861 fused_ordering(24) 00:13:48.861 fused_ordering(25) 00:13:48.861 fused_ordering(26) 00:13:48.861 fused_ordering(27) 00:13:48.861 fused_ordering(28) 00:13:48.861 fused_ordering(29) 00:13:48.861 fused_ordering(30) 00:13:48.861 fused_ordering(31) 00:13:48.861 fused_ordering(32) 00:13:48.861 fused_ordering(33) 00:13:48.861 fused_ordering(34) 00:13:48.861 fused_ordering(35) 00:13:48.861 fused_ordering(36) 00:13:48.861 fused_ordering(37) 00:13:48.861 fused_ordering(38) 00:13:48.861 fused_ordering(39) 00:13:48.861 fused_ordering(40) 00:13:48.861 fused_ordering(41) 00:13:48.861 fused_ordering(42) 00:13:48.861 fused_ordering(43) 00:13:48.861 fused_ordering(44) 00:13:48.861 fused_ordering(45) 00:13:48.861 fused_ordering(46) 00:13:48.861 fused_ordering(47) 00:13:48.861 fused_ordering(48) 00:13:48.861 fused_ordering(49) 00:13:48.861 fused_ordering(50) 00:13:48.861 fused_ordering(51) 00:13:48.861 fused_ordering(52) 00:13:48.861 fused_ordering(53) 00:13:48.861 fused_ordering(54) 00:13:48.861 fused_ordering(55) 00:13:48.861 fused_ordering(56) 00:13:48.861 fused_ordering(57) 00:13:48.861 fused_ordering(58) 00:13:48.861 fused_ordering(59) 00:13:48.861 fused_ordering(60) 00:13:48.861 fused_ordering(61) 00:13:48.861 fused_ordering(62) 00:13:48.861 fused_ordering(63) 00:13:48.861 fused_ordering(64) 00:13:48.861 fused_ordering(65) 00:13:48.861 fused_ordering(66) 00:13:48.861 fused_ordering(67) 00:13:48.861 fused_ordering(68) 00:13:48.861 fused_ordering(69) 00:13:48.861 fused_ordering(70) 00:13:48.861 fused_ordering(71) 00:13:48.861 fused_ordering(72) 00:13:48.861 fused_ordering(73) 00:13:48.861 fused_ordering(74) 00:13:48.861 fused_ordering(75) 00:13:48.861 fused_ordering(76) 00:13:48.861 fused_ordering(77) 00:13:48.861 fused_ordering(78) 00:13:48.861 fused_ordering(79) 00:13:48.861 fused_ordering(80) 00:13:48.861 fused_ordering(81) 00:13:48.861 fused_ordering(82) 00:13:48.861 fused_ordering(83) 00:13:48.861 fused_ordering(84) 00:13:48.861 fused_ordering(85) 00:13:48.861 fused_ordering(86) 00:13:48.861 fused_ordering(87) 00:13:48.861 fused_ordering(88) 00:13:48.861 fused_ordering(89) 00:13:48.861 fused_ordering(90) 00:13:48.861 fused_ordering(91) 00:13:48.861 fused_ordering(92) 00:13:48.861 fused_ordering(93) 00:13:48.861 fused_ordering(94) 00:13:48.861 fused_ordering(95) 00:13:48.861 fused_ordering(96) 00:13:48.861 fused_ordering(97) 00:13:48.861 fused_ordering(98) 
00:13:48.861 fused_ordering(99) 00:13:48.861 fused_ordering(100) 00:13:48.861 fused_ordering(101) 00:13:48.861 fused_ordering(102) 00:13:48.861 fused_ordering(103) 00:13:48.861 fused_ordering(104) 00:13:48.861 fused_ordering(105) 00:13:48.861 fused_ordering(106) 00:13:48.861 fused_ordering(107) 00:13:48.861 fused_ordering(108) 00:13:48.861 fused_ordering(109) 00:13:48.861 fused_ordering(110) 00:13:48.861 fused_ordering(111) 00:13:48.861 fused_ordering(112) 00:13:48.861 fused_ordering(113) 00:13:48.861 fused_ordering(114) 00:13:48.861 fused_ordering(115) 00:13:48.861 fused_ordering(116) 00:13:48.861 fused_ordering(117) 00:13:48.861 fused_ordering(118) 00:13:48.861 fused_ordering(119) 00:13:48.861 fused_ordering(120) 00:13:48.861 fused_ordering(121) 00:13:48.861 fused_ordering(122) 00:13:48.861 fused_ordering(123) 00:13:48.861 fused_ordering(124) 00:13:48.861 fused_ordering(125) 00:13:48.861 fused_ordering(126) 00:13:48.861 fused_ordering(127) 00:13:48.861 fused_ordering(128) 00:13:48.861 fused_ordering(129) 00:13:48.861 fused_ordering(130) 00:13:48.861 fused_ordering(131) 00:13:48.861 fused_ordering(132) 00:13:48.861 fused_ordering(133) 00:13:48.861 fused_ordering(134) 00:13:48.861 fused_ordering(135) 00:13:48.861 fused_ordering(136) 00:13:48.861 fused_ordering(137) 00:13:48.861 fused_ordering(138) 00:13:48.861 fused_ordering(139) 00:13:48.861 fused_ordering(140) 00:13:48.861 fused_ordering(141) 00:13:48.861 fused_ordering(142) 00:13:48.861 fused_ordering(143) 00:13:48.861 fused_ordering(144) 00:13:48.861 fused_ordering(145) 00:13:48.861 fused_ordering(146) 00:13:48.861 fused_ordering(147) 00:13:48.861 fused_ordering(148) 00:13:48.861 fused_ordering(149) 00:13:48.861 fused_ordering(150) 00:13:48.861 fused_ordering(151) 00:13:48.861 fused_ordering(152) 00:13:48.861 fused_ordering(153) 00:13:48.861 fused_ordering(154) 00:13:48.861 fused_ordering(155) 00:13:48.861 fused_ordering(156) 00:13:48.861 fused_ordering(157) 00:13:48.861 fused_ordering(158) 00:13:48.861 fused_ordering(159) 00:13:48.861 fused_ordering(160) 00:13:48.861 fused_ordering(161) 00:13:48.861 fused_ordering(162) 00:13:48.861 fused_ordering(163) 00:13:48.861 fused_ordering(164) 00:13:48.861 fused_ordering(165) 00:13:48.861 fused_ordering(166) 00:13:48.861 fused_ordering(167) 00:13:48.861 fused_ordering(168) 00:13:48.861 fused_ordering(169) 00:13:48.861 fused_ordering(170) 00:13:48.861 fused_ordering(171) 00:13:48.861 fused_ordering(172) 00:13:48.861 fused_ordering(173) 00:13:48.861 fused_ordering(174) 00:13:48.861 fused_ordering(175) 00:13:48.861 fused_ordering(176) 00:13:48.861 fused_ordering(177) 00:13:48.861 fused_ordering(178) 00:13:48.861 fused_ordering(179) 00:13:48.861 fused_ordering(180) 00:13:48.861 fused_ordering(181) 00:13:48.861 fused_ordering(182) 00:13:48.861 fused_ordering(183) 00:13:48.861 fused_ordering(184) 00:13:48.861 fused_ordering(185) 00:13:48.861 fused_ordering(186) 00:13:48.861 fused_ordering(187) 00:13:48.861 fused_ordering(188) 00:13:48.861 fused_ordering(189) 00:13:48.861 fused_ordering(190) 00:13:48.861 fused_ordering(191) 00:13:48.861 fused_ordering(192) 00:13:48.861 fused_ordering(193) 00:13:48.861 fused_ordering(194) 00:13:48.861 fused_ordering(195) 00:13:48.861 fused_ordering(196) 00:13:48.861 fused_ordering(197) 00:13:48.861 fused_ordering(198) 00:13:48.861 fused_ordering(199) 00:13:48.861 fused_ordering(200) 00:13:48.861 fused_ordering(201) 00:13:48.861 fused_ordering(202) 00:13:48.861 fused_ordering(203) 00:13:48.861 fused_ordering(204) 00:13:48.861 fused_ordering(205) 00:13:49.119 
fused_ordering(206) 00:13:49.119 fused_ordering(207) 00:13:49.119 fused_ordering(208) 00:13:49.119 fused_ordering(209) 00:13:49.119 fused_ordering(210) 00:13:49.119 fused_ordering(211) 00:13:49.119 fused_ordering(212) 00:13:49.119 fused_ordering(213) 00:13:49.119 fused_ordering(214) 00:13:49.119 fused_ordering(215) 00:13:49.119 fused_ordering(216) 00:13:49.119 fused_ordering(217) 00:13:49.119 fused_ordering(218) 00:13:49.119 fused_ordering(219) 00:13:49.119 fused_ordering(220) 00:13:49.119 fused_ordering(221) 00:13:49.119 fused_ordering(222) 00:13:49.119 fused_ordering(223) 00:13:49.119 fused_ordering(224) 00:13:49.119 fused_ordering(225) 00:13:49.119 fused_ordering(226) 00:13:49.119 fused_ordering(227) 00:13:49.119 fused_ordering(228) 00:13:49.119 fused_ordering(229) 00:13:49.119 fused_ordering(230) 00:13:49.119 fused_ordering(231) 00:13:49.119 fused_ordering(232) 00:13:49.119 fused_ordering(233) 00:13:49.119 fused_ordering(234) 00:13:49.119 fused_ordering(235) 00:13:49.119 fused_ordering(236) 00:13:49.119 fused_ordering(237) 00:13:49.119 fused_ordering(238) 00:13:49.119 fused_ordering(239) 00:13:49.119 fused_ordering(240) 00:13:49.119 fused_ordering(241) 00:13:49.119 fused_ordering(242) 00:13:49.119 fused_ordering(243) 00:13:49.119 fused_ordering(244) 00:13:49.119 fused_ordering(245) 00:13:49.119 fused_ordering(246) 00:13:49.119 fused_ordering(247) 00:13:49.119 fused_ordering(248) 00:13:49.119 fused_ordering(249) 00:13:49.119 fused_ordering(250) 00:13:49.119 fused_ordering(251) 00:13:49.119 fused_ordering(252) 00:13:49.119 fused_ordering(253) 00:13:49.119 fused_ordering(254) 00:13:49.119 fused_ordering(255) 00:13:49.119 fused_ordering(256) 00:13:49.119 fused_ordering(257) 00:13:49.119 fused_ordering(258) 00:13:49.119 fused_ordering(259) 00:13:49.119 fused_ordering(260) 00:13:49.119 fused_ordering(261) 00:13:49.119 fused_ordering(262) 00:13:49.119 fused_ordering(263) 00:13:49.119 fused_ordering(264) 00:13:49.119 fused_ordering(265) 00:13:49.119 fused_ordering(266) 00:13:49.119 fused_ordering(267) 00:13:49.119 fused_ordering(268) 00:13:49.119 fused_ordering(269) 00:13:49.119 fused_ordering(270) 00:13:49.119 fused_ordering(271) 00:13:49.119 fused_ordering(272) 00:13:49.119 fused_ordering(273) 00:13:49.119 fused_ordering(274) 00:13:49.119 fused_ordering(275) 00:13:49.119 fused_ordering(276) 00:13:49.119 fused_ordering(277) 00:13:49.119 fused_ordering(278) 00:13:49.119 fused_ordering(279) 00:13:49.119 fused_ordering(280) 00:13:49.119 fused_ordering(281) 00:13:49.119 fused_ordering(282) 00:13:49.119 fused_ordering(283) 00:13:49.119 fused_ordering(284) 00:13:49.119 fused_ordering(285) 00:13:49.119 fused_ordering(286) 00:13:49.119 fused_ordering(287) 00:13:49.119 fused_ordering(288) 00:13:49.119 fused_ordering(289) 00:13:49.119 fused_ordering(290) 00:13:49.119 fused_ordering(291) 00:13:49.120 fused_ordering(292) 00:13:49.120 fused_ordering(293) 00:13:49.120 fused_ordering(294) 00:13:49.120 fused_ordering(295) 00:13:49.120 fused_ordering(296) 00:13:49.120 fused_ordering(297) 00:13:49.120 fused_ordering(298) 00:13:49.120 fused_ordering(299) 00:13:49.120 fused_ordering(300) 00:13:49.120 fused_ordering(301) 00:13:49.120 fused_ordering(302) 00:13:49.120 fused_ordering(303) 00:13:49.120 fused_ordering(304) 00:13:49.120 fused_ordering(305) 00:13:49.120 fused_ordering(306) 00:13:49.120 fused_ordering(307) 00:13:49.120 fused_ordering(308) 00:13:49.120 fused_ordering(309) 00:13:49.120 fused_ordering(310) 00:13:49.120 fused_ordering(311) 00:13:49.120 fused_ordering(312) 00:13:49.120 fused_ordering(313) 
00:13:49.120 fused_ordering(314) 00:13:49.120 fused_ordering(315) 00:13:49.120 fused_ordering(316) [... fused_ordering(317) through fused_ordering(956) elided: the test emits one fused_ordering(N) line per iteration, timestamps advancing from 00:13:49.120 to 00:13:50.205, with no errors reported in between ...] 00:13:50.206 fused_ordering(957) 00:13:50.206 fused_ordering(958) 
00:13:50.206 fused_ordering(959) 00:13:50.206 fused_ordering(960) 00:13:50.206 fused_ordering(961) 00:13:50.206 fused_ordering(962) 00:13:50.206 fused_ordering(963) 00:13:50.206 fused_ordering(964) 00:13:50.206 fused_ordering(965) 00:13:50.206 fused_ordering(966) 00:13:50.206 fused_ordering(967) 00:13:50.206 fused_ordering(968) 00:13:50.206 fused_ordering(969) 00:13:50.206 fused_ordering(970) 00:13:50.206 fused_ordering(971) 00:13:50.206 fused_ordering(972) 00:13:50.206 fused_ordering(973) 00:13:50.206 fused_ordering(974) 00:13:50.206 fused_ordering(975) 00:13:50.206 fused_ordering(976) 00:13:50.206 fused_ordering(977) 00:13:50.206 fused_ordering(978) 00:13:50.206 fused_ordering(979) 00:13:50.206 fused_ordering(980) 00:13:50.206 fused_ordering(981) 00:13:50.206 fused_ordering(982) 00:13:50.206 fused_ordering(983) 00:13:50.206 fused_ordering(984) 00:13:50.206 fused_ordering(985) 00:13:50.206 fused_ordering(986) 00:13:50.206 fused_ordering(987) 00:13:50.206 fused_ordering(988) 00:13:50.206 fused_ordering(989) 00:13:50.206 fused_ordering(990) 00:13:50.206 fused_ordering(991) 00:13:50.206 fused_ordering(992) 00:13:50.206 fused_ordering(993) 00:13:50.206 fused_ordering(994) 00:13:50.206 fused_ordering(995) 00:13:50.206 fused_ordering(996) 00:13:50.206 fused_ordering(997) 00:13:50.206 fused_ordering(998) 00:13:50.206 fused_ordering(999) 00:13:50.206 fused_ordering(1000) 00:13:50.206 fused_ordering(1001) 00:13:50.206 fused_ordering(1002) 00:13:50.206 fused_ordering(1003) 00:13:50.206 fused_ordering(1004) 00:13:50.206 fused_ordering(1005) 00:13:50.206 fused_ordering(1006) 00:13:50.206 fused_ordering(1007) 00:13:50.206 fused_ordering(1008) 00:13:50.206 fused_ordering(1009) 00:13:50.206 fused_ordering(1010) 00:13:50.206 fused_ordering(1011) 00:13:50.206 fused_ordering(1012) 00:13:50.206 fused_ordering(1013) 00:13:50.206 fused_ordering(1014) 00:13:50.206 fused_ordering(1015) 00:13:50.206 fused_ordering(1016) 00:13:50.206 fused_ordering(1017) 00:13:50.206 fused_ordering(1018) 00:13:50.206 fused_ordering(1019) 00:13:50.206 fused_ordering(1020) 00:13:50.206 fused_ordering(1021) 00:13:50.206 fused_ordering(1022) 00:13:50.206 fused_ordering(1023) 00:13:50.206 10:06:20 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:50.206 10:06:20 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:50.206 10:06:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:50.206 10:06:20 -- nvmf/common.sh@116 -- # sync 00:13:50.206 10:06:20 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:50.206 10:06:20 -- nvmf/common.sh@119 -- # set +e 00:13:50.206 10:06:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:50.206 10:06:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:50.206 rmmod nvme_tcp 00:13:50.206 rmmod nvme_fabrics 00:13:50.206 rmmod nvme_keyring 00:13:50.206 10:06:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:50.206 10:06:20 -- nvmf/common.sh@123 -- # set -e 00:13:50.206 10:06:20 -- nvmf/common.sh@124 -- # return 0 00:13:50.206 10:06:20 -- nvmf/common.sh@477 -- # '[' -n 81785 ']' 00:13:50.206 10:06:20 -- nvmf/common.sh@478 -- # killprocess 81785 00:13:50.206 10:06:20 -- common/autotest_common.sh@926 -- # '[' -z 81785 ']' 00:13:50.206 10:06:20 -- common/autotest_common.sh@930 -- # kill -0 81785 00:13:50.206 10:06:20 -- common/autotest_common.sh@931 -- # uname 00:13:50.206 10:06:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:50.206 10:06:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81785 00:13:50.464 10:06:20 -- 
common/autotest_common.sh@932 -- # process_name=reactor_1 00:13:50.464 10:06:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:13:50.464 killing process with pid 81785 00:13:50.464 10:06:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81785' 00:13:50.464 10:06:20 -- common/autotest_common.sh@945 -- # kill 81785 00:13:50.464 10:06:20 -- common/autotest_common.sh@950 -- # wait 81785 00:13:50.464 10:06:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:50.464 10:06:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:50.464 10:06:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:50.464 10:06:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:50.464 10:06:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:50.464 10:06:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.464 10:06:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.464 10:06:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.464 10:06:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:50.464 00:13:50.464 real 0m3.670s 00:13:50.464 user 0m4.314s 00:13:50.464 sys 0m1.269s 00:13:50.464 10:06:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:50.464 10:06:20 -- common/autotest_common.sh@10 -- # set +x 00:13:50.464 ************************************ 00:13:50.464 END TEST nvmf_fused_ordering 00:13:50.464 ************************************ 00:13:50.723 10:06:21 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:50.723 10:06:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:50.723 10:06:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:50.723 10:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.723 ************************************ 00:13:50.723 START TEST nvmf_delete_subsystem 00:13:50.723 ************************************ 00:13:50.723 10:06:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:13:50.723 * Looking for test storage... 
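The run_test line above is the only thing this stage needs: the test itself is an ordinary bash script. As a minimal sketch, assuming a built SPDK tree at the same path this log uses and root privileges, the same stage can be reproduced by hand with the script path and flag taken verbatim from the run_test invocation:

    # Sketch: invoke the delete_subsystem target test directly over TCP.
    cd /home/vagrant/spdk_repo/spdk
    sudo ./test/nvmf/target/delete_subsystem.sh --transport=tcp
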
00:13:50.723 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:50.723 10:06:21 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:50.723 10:06:21 -- nvmf/common.sh@7 -- # uname -s 00:13:50.723 10:06:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:50.723 10:06:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:50.723 10:06:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:50.723 10:06:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:50.723 10:06:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:50.723 10:06:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:50.723 10:06:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:50.723 10:06:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:50.723 10:06:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:50.723 10:06:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:50.723 10:06:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:50.723 10:06:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:50.723 10:06:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:50.723 10:06:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:50.723 10:06:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:50.723 10:06:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:50.723 10:06:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:50.723 10:06:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:50.723 10:06:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:50.723 10:06:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[...the same golangci/protoc/go toolchain prefixes repeated, elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.724 10:06:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:[...repeated toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.724 10:06:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.724 10:06:21 -- paths/export.sh@5 -- # export PATH 00:13:50.724 10:06:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[...repeated toolchain prefixes elided...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:50.724 10:06:21 -- nvmf/common.sh@46 -- # : 0 00:13:50.724 10:06:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:50.724 10:06:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:50.724 10:06:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:50.724 10:06:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:50.724 10:06:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:50.724 10:06:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:50.724 10:06:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:50.724 10:06:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:50.724 10:06:21 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:13:50.724 10:06:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:50.724 10:06:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:50.724 10:06:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:50.724 10:06:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:50.724 10:06:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:50.724 10:06:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:50.724 10:06:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:50.724 10:06:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:50.724 10:06:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:50.724 10:06:21 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:50.724 10:06:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:50.724 10:06:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:50.724 10:06:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:13:50.724 10:06:21 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:50.724 10:06:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:50.724 10:06:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:50.724 10:06:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:50.724 10:06:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:50.724 10:06:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:50.724 10:06:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:50.724 10:06:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:50.724 10:06:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:50.724 10:06:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:50.724 10:06:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:50.724 10:06:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:50.724 10:06:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:50.724 10:06:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:50.724 10:06:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:50.724 Cannot find device "nvmf_tgt_br" 00:13:50.724 
10:06:21 -- nvmf/common.sh@154 -- # true 00:13:50.724 10:06:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:50.724 Cannot find device "nvmf_tgt_br2" 00:13:50.724 10:06:21 -- nvmf/common.sh@155 -- # true 00:13:50.724 10:06:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:50.724 10:06:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:50.724 Cannot find device "nvmf_tgt_br" 00:13:50.724 10:06:21 -- nvmf/common.sh@157 -- # true 00:13:50.724 10:06:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:50.724 Cannot find device "nvmf_tgt_br2" 00:13:50.724 10:06:21 -- nvmf/common.sh@158 -- # true 00:13:50.724 10:06:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:50.724 10:06:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:50.724 10:06:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:50.724 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.724 10:06:21 -- nvmf/common.sh@161 -- # true 00:13:50.724 10:06:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:50.983 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:50.983 10:06:21 -- nvmf/common.sh@162 -- # true 00:13:50.983 10:06:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:50.983 10:06:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:50.983 10:06:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:50.983 10:06:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:50.983 10:06:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:50.983 10:06:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:50.983 10:06:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:50.983 10:06:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:50.983 10:06:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:13:50.983 10:06:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:50.983 10:06:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:50.983 10:06:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:50.983 10:06:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:50.983 10:06:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:50.983 10:06:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:50.983 10:06:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:50.983 10:06:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:50.983 10:06:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:50.983 10:06:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:50.983 10:06:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:50.983 10:06:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:50.983 10:06:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:50.983 10:06:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:50.983 10:06:21 -- nvmf/common.sh@204 -- # ping 
-c 1 10.0.0.2 00:13:50.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:50.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:13:50.983 00:13:50.983 --- 10.0.0.2 ping statistics --- 00:13:50.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.983 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:13:50.983 10:06:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:50.983 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:50.983 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:13:50.983 00:13:50.983 --- 10.0.0.3 ping statistics --- 00:13:50.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.983 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:13:50.983 10:06:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:50.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:50.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:13:50.983 00:13:50.983 --- 10.0.0.1 ping statistics --- 00:13:50.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:50.983 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:13:50.983 10:06:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:50.983 10:06:21 -- nvmf/common.sh@421 -- # return 0 00:13:50.983 10:06:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:50.983 10:06:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:50.983 10:06:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:50.983 10:06:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:50.983 10:06:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:50.983 10:06:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:50.983 10:06:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:50.983 10:06:21 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:13:50.983 10:06:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:50.983 10:06:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:50.983 10:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:50.983 10:06:21 -- nvmf/common.sh@469 -- # nvmfpid=82026 00:13:50.983 10:06:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:13:50.983 10:06:21 -- nvmf/common.sh@470 -- # waitforlisten 82026 00:13:50.983 10:06:21 -- common/autotest_common.sh@819 -- # '[' -z 82026 ']' 00:13:50.983 10:06:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:50.983 10:06:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:50.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:50.983 10:06:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:50.983 10:06:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:50.983 10:06:21 -- common/autotest_common.sh@10 -- # set +x 00:13:51.241 [2024-07-14 10:06:21.537922] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
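The nvmf_veth_init block above builds the test network: a private namespace (nvmf_tgt_ns_spdk) holding the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), joined to the initiator-side nvmf_init_if (10.0.0.1) through the nvmf_br bridge and iptables ACCEPT rules, and the three pings at nvmf/common.sh@204-206 verify reachability in both directions. A stripped-down sketch of the same idea follows; note it is a simplification, using one veth pair directly instead of the two bridged pairs the script creates (names and addresses copied from the log, run as root):

    # Sketch only: the real nvmf_veth_init bridges separate veth pairs via nvmf_br.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_tgt_if
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ping -c 1 10.0.0.2    # initiator -> target, as in nvmf/common.sh@204

Running nvmf_tgt inside the namespace (via NVMF_TARGET_NS_CMD, as the nvmfappstart line above does) is what lets the suite exercise real TCP to 10.0.0.2 without touching the host's own interfaces.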
00:13:51.241 [2024-07-14 10:06:21.538016] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:51.241 [2024-07-14 10:06:21.663409] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:13:51.241 [2024-07-14 10:06:21.720058] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:51.242 [2024-07-14 10:06:21.720245] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:51.242 [2024-07-14 10:06:21.720259] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:51.242 [2024-07-14 10:06:21.720268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:51.242 [2024-07-14 10:06:21.720412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:51.242 [2024-07-14 10:06:21.720421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.175 10:06:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:52.175 10:06:22 -- common/autotest_common.sh@852 -- # return 0 00:13:52.175 10:06:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:52.175 10:06:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:52.175 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.175 10:06:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:52.175 10:06:22 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:52.175 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.175 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.175 [2024-07-14 10:06:22.575904] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:52.175 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.175 10:06:22 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:52.175 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.175 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.175 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.175 10:06:22 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:52.175 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.175 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.175 [2024-07-14 10:06:22.592018] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:52.175 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.175 10:06:22 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:52.175 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.175 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.175 NULL1 00:13:52.176 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.176 10:06:22 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:52.176 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.176 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.176 
Delay0 00:13:52.176 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.176 10:06:22 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.176 10:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:52.176 10:06:22 -- common/autotest_common.sh@10 -- # set +x 00:13:52.176 10:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:52.176 10:06:22 -- target/delete_subsystem.sh@28 -- # perf_pid=82077 00:13:52.176 10:06:22 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:52.176 10:06:22 -- target/delete_subsystem.sh@30 -- # sleep 2 00:13:52.434 [2024-07-14 10:06:22.786479] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:54.336 10:06:24 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:54.336 10:06:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:54.336 10:06:24 -- common/autotest_common.sh@10 -- # set +x 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 
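Here rpc_cmd is the harness shorthand for SPDK's scripts/rpc.py against the nvmf_tgt started above. Spelled out as a sketch (assuming a live target on the default RPC socket; every argument below is copied from the rpc_cmd and spdk_nvme_perf lines in this log), the sequence under test is: build a subsystem backed by a null bdev wrapped in a delay bdev (the four 1000000 values are injected latencies, about one second each if the units are microseconds), start perf against it, then delete the subsystem while a full queue of I/O is still in flight:

    # Sketch of the delete-under-load sequence; run from the SPDK tree as root.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py bdev_null_create NULL1 1000 512    # 1000 MiB, 512-byte blocks
    ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2
    ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    while kill -0 "$perf_pid" 2>/dev/null; do sleep 0.5; done   # same poll as delete_subsystem.sh@34-36

With the delay bdev holding every request, deleting the subsystem guarantees queued commands (-q 128 per worker core) are still outstanding, so the stream of 'completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' aborts that follows is the behavior being verified, not a failure of the run.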
00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Write completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 Read completed with error (sct=0, sc=8) 00:13:54.336 starting I/O failed: -6 [... repeated Read/Write 'completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' markers elided ...] 00:13:54.336 [2024-07-14 10:06:24.820426] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b4e0 is same with the state(5) to be set [... further error completions elided ...] 00:13:54.336 [2024-07-14 10:06:24.821761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f44000c00 is same with the state(5) to be set [... further error completions elided ...] 00:13:54.337 [2024-07-14 10:06:24.822806] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f4400c350 is same with the state(5) to be set 00:13:55.271 [2024-07-14 10:06:25.799460] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178e460 is same with the state(5) to be set [... further error completions elided ...] 00:13:55.530 [2024-07-14 10:06:25.820213] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b790 is same with the state(5) to be set [... further error completions elided ...] 00:13:55.530 [2024-07-14 10:06:25.820808] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f4400bf20 is same with the state(5) to be set [... further error completions elided ...] 00:13:55.530 [2024-07-14 10:06:25.821770] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4f4400c600 is same with the state(5) to be set [... further error completions elided ...] 00:13:55.530 [2024-07-14 10:06:25.822696] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x178b230 is same with the state(5) to be set 00:13:55.531 [2024-07-14 10:06:25.823282] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x178e460 (9): Bad file descriptor 00:13:55.531 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:13:55.531 10:06:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:55.531 10:06:25 -- target/delete_subsystem.sh@34 -- # delay=0 00:13:55.531 10:06:25 -- target/delete_subsystem.sh@35 -- # kill -0 82077 00:13:55.531 10:06:25 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:13:55.531 Initializing NVMe Controllers 00:13:55.531 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:55.531 Controller IO queue size 128, less than required. 00:13:55.531 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:55.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:55.531 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:55.531 Initialization complete. Launching workers. 
00:13:55.531 ======================================================== 00:13:55.531 Latency(us) 00:13:55.531 Device Information : IOPS MiB/s Average min max 00:13:55.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.93 0.09 880891.91 371.17 1010251.66 00:13:55.531 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.88 0.09 899966.03 534.90 1011683.38 00:13:55.531 ======================================================== 00:13:55.531 Total : 362.81 0.18 890664.13 371.17 1011683.38 00:13:55.531 00:13:55.790 10:06:26 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:13:55.790 10:06:26 -- target/delete_subsystem.sh@35 -- # kill -0 82077 00:13:55.790 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (82077) - No such process 00:13:55.790 10:06:26 -- target/delete_subsystem.sh@45 -- # NOT wait 82077 00:13:55.790 10:06:26 -- common/autotest_common.sh@640 -- # local es=0 00:13:55.790 10:06:26 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 82077 00:13:55.790 10:06:26 -- common/autotest_common.sh@628 -- # local arg=wait 00:13:55.790 10:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:55.790 10:06:26 -- common/autotest_common.sh@632 -- # type -t wait 00:13:55.790 10:06:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:55.790 10:06:26 -- common/autotest_common.sh@643 -- # wait 82077 00:13:55.790 10:06:26 -- common/autotest_common.sh@643 -- # es=1 00:13:55.790 10:06:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:55.790 10:06:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:56.049 10:06:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:56.049 10:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.049 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:13:56.049 10:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:56.049 10:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.049 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:13:56.049 [2024-07-14 10:06:26.352890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:56.049 10:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.049 10:06:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:56.049 10:06:26 -- common/autotest_common.sh@10 -- # set +x 00:13:56.049 10:06:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@54 -- # perf_pid=82123 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@56 -- # delay=0 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:56.049 10:06:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:56.049 [2024-07-14 10:06:26.519005] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:13:56.615 10:06:26 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:56.615 10:06:26 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:56.615 10:06:26 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:56.874 10:06:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:56.874 10:06:27 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:56.874 10:06:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:57.440 10:06:27 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:57.440 10:06:27 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:57.440 10:06:27 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.008 10:06:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.008 10:06:28 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:58.008 10:06:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:58.576 10:06:28 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:58.576 10:06:28 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:58.576 10:06:28 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.141 10:06:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.141 10:06:29 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:59.141 10:06:29 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:13:59.141 Initializing NVMe Controllers 00:13:59.141 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:59.141 Controller IO queue size 128, less than required. 00:13:59.141 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:13:59.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:13:59.141 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:13:59.141 Initialization complete. Launching workers. 
00:13:59.141 ======================================================== 00:13:59.141 Latency(us) 00:13:59.141 Device Information : IOPS MiB/s Average min max 00:13:59.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003076.27 1000121.64 1043210.39 00:13:59.141 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005415.44 1000209.49 1011747.30 00:13:59.141 ======================================================== 00:13:59.141 Total : 256.00 0.12 1004245.86 1000121.64 1043210.39 00:13:59.141 00:13:59.399 10:06:29 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:13:59.399 10:06:29 -- target/delete_subsystem.sh@57 -- # kill -0 82123 00:13:59.399 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (82123) - No such process 00:13:59.399 10:06:29 -- target/delete_subsystem.sh@67 -- # wait 82123 00:13:59.399 10:06:29 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:13:59.399 10:06:29 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:13:59.399 10:06:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:59.399 10:06:29 -- nvmf/common.sh@116 -- # sync 00:13:59.399 10:06:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:59.399 10:06:29 -- nvmf/common.sh@119 -- # set +e 00:13:59.399 10:06:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:59.399 10:06:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:59.399 rmmod nvme_tcp 00:13:59.657 rmmod nvme_fabrics 00:13:59.657 rmmod nvme_keyring 00:13:59.657 10:06:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:59.657 10:06:29 -- nvmf/common.sh@123 -- # set -e 00:13:59.657 10:06:29 -- nvmf/common.sh@124 -- # return 0 00:13:59.657 10:06:29 -- nvmf/common.sh@477 -- # '[' -n 82026 ']' 00:13:59.657 10:06:29 -- nvmf/common.sh@478 -- # killprocess 82026 00:13:59.657 10:06:29 -- common/autotest_common.sh@926 -- # '[' -z 82026 ']' 00:13:59.657 10:06:29 -- common/autotest_common.sh@930 -- # kill -0 82026 00:13:59.657 10:06:29 -- common/autotest_common.sh@931 -- # uname 00:13:59.657 10:06:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:59.657 10:06:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82026 00:13:59.657 killing process with pid 82026 00:13:59.657 10:06:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:59.657 10:06:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:59.657 10:06:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82026' 00:13:59.657 10:06:30 -- common/autotest_common.sh@945 -- # kill 82026 00:13:59.657 10:06:30 -- common/autotest_common.sh@950 -- # wait 82026 00:13:59.916 10:06:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:59.916 10:06:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:59.916 10:06:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:59.916 10:06:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.916 10:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.916 10:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.916 10:06:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:59.916 00:13:59.916 real 0m9.230s 00:13:59.916 user 0m28.724s 00:13:59.916 sys 0m1.469s 00:13:59.916 10:06:30 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.916 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:13:59.916 ************************************ 00:13:59.916 END TEST nvmf_delete_subsystem 00:13:59.916 ************************************ 00:13:59.916 10:06:30 -- nvmf/nvmf.sh@36 -- # [[ 0 -eq 1 ]] 00:13:59.916 10:06:30 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:13:59.916 10:06:30 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:59.916 10:06:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:59.916 10:06:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:59.916 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:13:59.916 ************************************ 00:13:59.916 START TEST nvmf_host_management 00:13:59.916 ************************************ 00:13:59.916 10:06:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:13:59.916 * Looking for test storage... 00:13:59.916 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:59.916 10:06:30 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:59.916 10:06:30 -- nvmf/common.sh@7 -- # uname -s 00:13:59.916 10:06:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.916 10:06:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.916 10:06:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.916 10:06:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.916 10:06:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.916 10:06:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.916 10:06:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.916 10:06:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.916 10:06:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.916 10:06:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:59.916 10:06:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:13:59.916 10:06:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.916 10:06:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.916 10:06:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:59.916 10:06:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:59.916 10:06:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.916 10:06:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.916 10:06:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.916 10:06:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.916 10:06:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.916 10:06:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.916 10:06:30 -- paths/export.sh@5 -- # export PATH 00:13:59.916 10:06:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.916 10:06:30 -- nvmf/common.sh@46 -- # : 0 00:13:59.916 10:06:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:59.916 10:06:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:59.916 10:06:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:59.916 10:06:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.916 10:06:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.916 10:06:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:59.916 10:06:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:59.916 10:06:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:59.916 10:06:30 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:59.916 10:06:30 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:59.916 10:06:30 -- target/host_management.sh@104 -- # nvmftestinit 00:13:59.916 10:06:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:59.916 10:06:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.916 10:06:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:59.916 10:06:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:59.916 10:06:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:59.916 10:06:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.916 10:06:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:59.916 10:06:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.916 10:06:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:59.916 10:06:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp 
]] 00:13:59.916 10:06:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:59.916 10:06:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:59.916 10:06:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.916 10:06:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:59.916 10:06:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:59.916 10:06:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:59.916 10:06:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:59.916 10:06:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:59.916 10:06:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.916 10:06:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:59.916 10:06:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:59.916 10:06:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:59.916 10:06:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:59.916 10:06:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:59.916 10:06:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:59.916 Cannot find device "nvmf_tgt_br" 00:13:59.916 10:06:30 -- nvmf/common.sh@154 -- # true 00:13:59.916 10:06:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:00.175 Cannot find device "nvmf_tgt_br2" 00:14:00.175 10:06:30 -- nvmf/common.sh@155 -- # true 00:14:00.175 10:06:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:00.175 10:06:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:00.175 Cannot find device "nvmf_tgt_br" 00:14:00.175 10:06:30 -- nvmf/common.sh@157 -- # true 00:14:00.175 10:06:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:00.175 Cannot find device "nvmf_tgt_br2" 00:14:00.175 10:06:30 -- nvmf/common.sh@158 -- # true 00:14:00.175 10:06:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:00.175 10:06:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:00.175 10:06:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:00.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.175 10:06:30 -- nvmf/common.sh@161 -- # true 00:14:00.175 10:06:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:00.175 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:00.175 10:06:30 -- nvmf/common.sh@162 -- # true 00:14:00.175 10:06:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:00.175 10:06:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:00.175 10:06:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:00.175 10:06:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:00.175 10:06:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:00.175 10:06:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:00.175 10:06:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:00.175 10:06:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:00.175 10:06:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:00.175 
10:06:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:00.175 10:06:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:00.175 10:06:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:00.176 10:06:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:00.176 10:06:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:00.176 10:06:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:00.176 10:06:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:00.176 10:06:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:00.176 10:06:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:00.176 10:06:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:00.176 10:06:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:00.176 10:06:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:00.434 10:06:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:00.434 10:06:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:00.434 10:06:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:00.434 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:00.434 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:14:00.434 00:14:00.434 --- 10.0.0.2 ping statistics --- 00:14:00.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.434 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:14:00.434 10:06:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:00.434 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:00.434 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:14:00.434 00:14:00.434 --- 10.0.0.3 ping statistics --- 00:14:00.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.434 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:14:00.434 10:06:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:00.434 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:00.434 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:14:00.434 00:14:00.434 --- 10.0.0.1 ping statistics --- 00:14:00.434 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.434 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:14:00.434 10:06:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.434 10:06:30 -- nvmf/common.sh@421 -- # return 0 00:14:00.434 10:06:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:00.434 10:06:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.434 10:06:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:00.434 10:06:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:00.434 10:06:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.434 10:06:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:00.434 10:06:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:00.434 10:06:30 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:14:00.434 10:06:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:00.434 10:06:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:00.434 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:14:00.434 ************************************ 00:14:00.434 START TEST nvmf_host_management 00:14:00.434 ************************************ 00:14:00.434 10:06:30 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:14:00.434 10:06:30 -- target/host_management.sh@69 -- # starttarget 00:14:00.434 10:06:30 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:00.434 10:06:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:00.434 10:06:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:00.434 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:14:00.434 10:06:30 -- nvmf/common.sh@469 -- # nvmfpid=82358 00:14:00.434 10:06:30 -- nvmf/common.sh@470 -- # waitforlisten 82358 00:14:00.434 10:06:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:00.435 10:06:30 -- common/autotest_common.sh@819 -- # '[' -z 82358 ']' 00:14:00.435 10:06:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.435 10:06:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:00.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.435 10:06:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.435 10:06:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:00.435 10:06:30 -- common/autotest_common.sh@10 -- # set +x 00:14:00.435 [2024-07-14 10:06:30.832634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:00.435 [2024-07-14 10:06:30.832717] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.435 [2024-07-14 10:06:30.973476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:00.693 [2024-07-14 10:06:31.049539] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:00.693 [2024-07-14 10:06:31.049723] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
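
The namespace plumbing that nvmf_veth_init performed above is what lets the target run isolated under ip netns exec while the initiator pings it from the host side. A condensed sketch of those steps, with device and address names taken from the trace (the nvmf_tgt_if2/10.0.0.3 leg and error handling omitted, so this is illustrative rather than the exact helper):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two host-side halves
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
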
00:14:00.693 [2024-07-14 10:06:31.049735] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.693 [2024-07-14 10:06:31.049744] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.693 [2024-07-14 10:06:31.049857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.693 [2024-07-14 10:06:31.050748] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.693 [2024-07-14 10:06:31.050860] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:14:00.693 [2024-07-14 10:06:31.050864] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:01.628 10:06:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:01.628 10:06:31 -- common/autotest_common.sh@852 -- # return 0 00:14:01.628 10:06:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:01.628 10:06:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:01.628 10:06:31 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 10:06:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:01.628 10:06:31 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:01.628 10:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.628 10:06:31 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 [2024-07-14 10:06:31.863854] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:01.628 10:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.628 10:06:31 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:01.628 10:06:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:01.628 10:06:31 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 10:06:31 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:01.628 10:06:31 -- target/host_management.sh@23 -- # cat 00:14:01.628 10:06:31 -- target/host_management.sh@30 -- # rpc_cmd 00:14:01.628 10:06:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:01.628 10:06:31 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 Malloc0 00:14:01.628 [2024-07-14 10:06:31.948367] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:01.628 10:06:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:01.628 10:06:31 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:01.628 10:06:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:01.628 10:06:31 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:01.628 10:06:31 -- target/host_management.sh@73 -- # perfpid=82430 00:14:01.628 10:06:31 -- target/host_management.sh@74 -- # waitforlisten 82430 /var/tmp/bdevperf.sock 00:14:01.628 10:06:31 -- common/autotest_common.sh@819 -- # '[' -z 82430 ']' 00:14:01.628 10:06:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:01.628 10:06:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:01.628 10:06:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:14:01.628 10:06:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:01.628 10:06:32 -- common/autotest_common.sh@10 -- # set +x 00:14:01.628 10:06:32 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:01.628 10:06:32 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:01.628 10:06:32 -- nvmf/common.sh@520 -- # config=() 00:14:01.628 10:06:32 -- nvmf/common.sh@520 -- # local subsystem config 00:14:01.628 10:06:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:01.628 10:06:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:01.628 { 00:14:01.628 "params": { 00:14:01.628 "name": "Nvme$subsystem", 00:14:01.628 "trtype": "$TEST_TRANSPORT", 00:14:01.628 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:01.628 "adrfam": "ipv4", 00:14:01.628 "trsvcid": "$NVMF_PORT", 00:14:01.628 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:01.628 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:01.628 "hdgst": ${hdgst:-false}, 00:14:01.628 "ddgst": ${ddgst:-false} 00:14:01.628 }, 00:14:01.628 "method": "bdev_nvme_attach_controller" 00:14:01.628 } 00:14:01.628 EOF 00:14:01.628 )") 00:14:01.628 10:06:32 -- nvmf/common.sh@542 -- # cat 00:14:01.628 10:06:32 -- nvmf/common.sh@544 -- # jq . 00:14:01.628 10:06:32 -- nvmf/common.sh@545 -- # IFS=, 00:14:01.628 10:06:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:01.628 "params": { 00:14:01.628 "name": "Nvme0", 00:14:01.628 "trtype": "tcp", 00:14:01.628 "traddr": "10.0.0.2", 00:14:01.628 "adrfam": "ipv4", 00:14:01.628 "trsvcid": "4420", 00:14:01.628 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:01.628 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:01.628 "hdgst": false, 00:14:01.628 "ddgst": false 00:14:01.628 }, 00:14:01.628 "method": "bdev_nvme_attach_controller" 00:14:01.628 }' 00:14:01.628 [2024-07-14 10:06:32.054599] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:01.628 [2024-07-14 10:06:32.054686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82430 ] 00:14:01.887 [2024-07-14 10:06:32.197141] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.887 [2024-07-14 10:06:32.284400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:02.146 Running I/O for 10 seconds... 
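
bdevperf consumes that printf output through --json /dev/fd/63, i.e. a process-substituted config file. The fragment printed above is only the attach-controller entry; the full document gen_nvmf_target_json hands over is shaped roughly as below (the subsystems/bdev wrapper follows the standard SPDK JSON-config layout and is not shown verbatim in the log):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
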
00:14:02.715 10:06:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:02.715 10:06:33 -- common/autotest_common.sh@852 -- # return 0 00:14:02.715 10:06:33 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:02.715 10:06:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.715 10:06:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.715 10:06:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.715 10:06:33 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:02.715 10:06:33 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:02.715 10:06:33 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:02.715 10:06:33 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:02.715 10:06:33 -- target/host_management.sh@52 -- # local ret=1 00:14:02.715 10:06:33 -- target/host_management.sh@53 -- # local i 00:14:02.715 10:06:33 -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:02.715 10:06:33 -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:02.715 10:06:33 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:02.715 10:06:33 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:02.715 10:06:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.715 10:06:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.715 10:06:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.715 10:06:33 -- target/host_management.sh@55 -- # read_io_count=2119 00:14:02.715 10:06:33 -- target/host_management.sh@58 -- # '[' 2119 -ge 100 ']' 00:14:02.715 10:06:33 -- target/host_management.sh@59 -- # ret=0 00:14:02.715 10:06:33 -- target/host_management.sh@60 -- # break 00:14:02.715 10:06:33 -- target/host_management.sh@64 -- # return 0 00:14:02.715 10:06:33 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:02.715 10:06:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.715 10:06:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.715 [2024-07-14 10:06:33.125546] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125636] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125664] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125673] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125682] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125691] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125699] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125707] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to 
be set 00:14:02.715 [2024-07-14 10:06:33.125716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125733] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125742] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125750] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125758] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125767] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125775] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125783] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125790] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125799] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125807] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125823] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125831] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125880] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125888] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125912] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125921] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125929] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125937] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125953] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.715 [2024-07-14 10:06:33.125962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.125970] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.125978] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.125985] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.125993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126001] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126098] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126106] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.126130] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8e4880 is same with the state(5) to be set 00:14:02.716 [2024-07-14 10:06:33.127825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.128911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.128971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:22 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.129906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.129966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.130902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.130961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 10:06:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:02.716 [2024-07-14 10:06:33.131423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.131854] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.131919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 10:06:33 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:02.716 [2024-07-14 10:06:33.131978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.132036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.132087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.132143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.132194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.716 [2024-07-14 10:06:33.132288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.716 [2024-07-14 10:06:33.132359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 10:06:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:02.717 [2024-07-14 10:06:33.132420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.132473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.132535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.132609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.132670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.132729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.132795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 10:06:33 -- common/autotest_common.sh@10 -- # set +x 00:14:02.717 [2024-07-14 10:06:33.132855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.132919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.132977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39296 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.133935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.133993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.134892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.134963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.135747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:02.717 [2024-07-14 10:06:33.135893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.135966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:02.717 [2024-07-14 10:06:33.136734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:02.717 [2024-07-14 10:06:33.136805] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a00c0 is same with the state(5) to be set 00:14:02.717 [2024-07-14 10:06:33.136939] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x19a00c0 was disconnected and freed. reset controller. 
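[editor's note] The flood of "ABORTED - SQ DELETION (00/08)" completions above, and the admin-queue ASYNC EVENT REQUEST aborts that follow below, are the expected signature of this test: the target tears the subsystem down while verify I/O is in flight, so every queued command completes with an abort status. The "(00/08)" pair printed by spdk_nvme_print_completion is (status code type / status code) from the NVMe completion status word, and the trailing p/m/dnr fields are its phase, more, and do-not-retry bits. A small decoding sketch, added for illustration and not part of the captured run:

# NVMe CQE status word layout (per the NVMe base spec):
# bit 0 = P (phase), bits 8:1 = SC, bits 11:9 = SCT, bit 14 = M, bit 15 = DNR.
status=$(( (0x0 << 9) | (0x08 << 1) ))   # SCT 0x0 (generic), SC 0x08
sct=$(( (status >> 9) & 0x7 ))
sc=$(( (status >> 1) & 0xff ))
dnr=$(( (status >> 15) & 0x1 ))
printf '(%02x/%02x) dnr:%d\n' "$sct" "$sc" "$dnr"   # -> (00/08) dnr:0, i.e. ABORTED - SQ DELETION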
00:14:02.717 [2024-07-14 10:06:33.137105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.717 [2024-07-14 10:06:33.137232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:02.717 [2024-07-14 10:06:33.137325] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.717 [2024-07-14 10:06:33.137402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:02.717 [2024-07-14 10:06:33.137454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.717 [2024-07-14 10:06:33.137524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:02.717 [2024-07-14 10:06:33.137576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:14:02.717 [2024-07-14 10:06:33.137658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:14:02.717 [2024-07-14 10:06:33.137723] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a26d0 is same with the state(5) to be set
00:14:02.717 [2024-07-14 10:06:33.138939] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:02.717 task offset: 34816 on job bdev=Nvme0n1 fails
00:14:02.717
00:14:02.717 Latency(us)
00:14:02.717 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:14:02.717 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:02.717 Job: Nvme0n1 ended in about 0.68 seconds with error
00:14:02.717 Verification LBA range: start 0x0 length 0x400
00:14:02.717 Nvme0n1 : 0.68 3412.75 213.30 94.68 0.00 17906.80 6911.07 25022.84
00:14:02.717 ===================================================================================================================
00:14:02.717 Total : 3412.75 213.30 94.68 0.00 17906.80 6911.07 25022.84
00:14:02.717 [2024-07-14 10:06:33.141030] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:02.717 [2024-07-14 10:06:33.141141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a26d0 (9): Bad file descriptor
00:14:02.717 10:06:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:14:02.717 10:06:33 -- target/host_management.sh@87 -- # sleep 1
[2024-07-14 10:06:33.150908] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
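[editor's note] A quick cross-check of the failed-job row above: bdevperf ran 64 KiB I/Os (IO size: 65536), so the bandwidth column should equal IOPS / 16, and it does:

awk 'BEGIN { printf "%.2f MiB/s\n", 3412.75 * 65536 / (1024 * 1024) }'   # -> 213.30 MiB/s

The non-zero Fail/s column (94.68) counts the I/Os that completed with the SQ-deletion aborts logged above. The failure is induced deliberately; the follow-up kill -9 below finds the bdevperf process already gone and the script continues via `true`.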
00:14:03.650 10:06:34 -- target/host_management.sh@91 -- # kill -9 82430 00:14:03.650 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (82430) - No such process 00:14:03.650 10:06:34 -- target/host_management.sh@91 -- # true 00:14:03.650 10:06:34 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:14:03.650 10:06:34 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:14:03.650 10:06:34 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:14:03.650 10:06:34 -- nvmf/common.sh@520 -- # config=() 00:14:03.650 10:06:34 -- nvmf/common.sh@520 -- # local subsystem config 00:14:03.650 10:06:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:14:03.650 10:06:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:14:03.650 { 00:14:03.650 "params": { 00:14:03.650 "name": "Nvme$subsystem", 00:14:03.650 "trtype": "$TEST_TRANSPORT", 00:14:03.650 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:03.650 "adrfam": "ipv4", 00:14:03.650 "trsvcid": "$NVMF_PORT", 00:14:03.650 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:03.650 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:03.650 "hdgst": ${hdgst:-false}, 00:14:03.650 "ddgst": ${ddgst:-false} 00:14:03.650 }, 00:14:03.650 "method": "bdev_nvme_attach_controller" 00:14:03.650 } 00:14:03.650 EOF 00:14:03.650 )") 00:14:03.650 10:06:34 -- nvmf/common.sh@542 -- # cat 00:14:03.650 10:06:34 -- nvmf/common.sh@544 -- # jq . 00:14:03.650 10:06:34 -- nvmf/common.sh@545 -- # IFS=, 00:14:03.650 10:06:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:14:03.650 "params": { 00:14:03.650 "name": "Nvme0", 00:14:03.650 "trtype": "tcp", 00:14:03.650 "traddr": "10.0.0.2", 00:14:03.650 "adrfam": "ipv4", 00:14:03.650 "trsvcid": "4420", 00:14:03.650 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:03.650 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:03.650 "hdgst": false, 00:14:03.650 "ddgst": false 00:14:03.650 }, 00:14:03.650 "method": "bdev_nvme_attach_controller" 00:14:03.650 }' 00:14:03.909 [2024-07-14 10:06:34.205022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:03.910 [2024-07-14 10:06:34.205112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82480 ] 00:14:03.910 [2024-07-14 10:06:34.347402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:03.910 [2024-07-14 10:06:34.425329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.168 Running I/O for 1 seconds... 
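[editor's note] The heredoc traced above is the per-controller fragment that gen_nvmf_target_json assembles into a complete bdevperf configuration and streams in over --json /dev/fd/62. Written to a file by hand it would look roughly like the sketch below; the outer wrapper shape is an assumption inferred from the printed fragment, and the file name is hypothetical:

cat > /tmp/nvme0_target.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
build/examples/bdevperf --json /tmp/nvme0_target.json -q 64 -o 65536 -w verify -t 1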
00:14:05.105 00:14:05.106 Latency(us) 00:14:05.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:05.106 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:14:05.106 Verification LBA range: start 0x0 length 0x400 00:14:05.106 Nvme0n1 : 1.01 3698.09 231.13 0.00 0.00 17004.44 1504.35 22043.93 00:14:05.106 =================================================================================================================== 00:14:05.106 Total : 3698.09 231.13 0.00 0.00 17004.44 1504.35 22043.93 00:14:05.365 10:06:35 -- target/host_management.sh@101 -- # stoptarget 00:14:05.365 10:06:35 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:14:05.365 10:06:35 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:14:05.365 10:06:35 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:14:05.365 10:06:35 -- target/host_management.sh@40 -- # nvmftestfini 00:14:05.365 10:06:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:05.365 10:06:35 -- nvmf/common.sh@116 -- # sync 00:14:05.365 10:06:35 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:05.365 10:06:35 -- nvmf/common.sh@119 -- # set +e 00:14:05.365 10:06:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:05.365 10:06:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:05.365 rmmod nvme_tcp 00:14:05.365 rmmod nvme_fabrics 00:14:05.365 rmmod nvme_keyring 00:14:05.624 10:06:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:05.624 10:06:35 -- nvmf/common.sh@123 -- # set -e 00:14:05.624 10:06:35 -- nvmf/common.sh@124 -- # return 0 00:14:05.624 10:06:35 -- nvmf/common.sh@477 -- # '[' -n 82358 ']' 00:14:05.624 10:06:35 -- nvmf/common.sh@478 -- # killprocess 82358 00:14:05.624 10:06:35 -- common/autotest_common.sh@926 -- # '[' -z 82358 ']' 00:14:05.624 10:06:35 -- common/autotest_common.sh@930 -- # kill -0 82358 00:14:05.624 10:06:35 -- common/autotest_common.sh@931 -- # uname 00:14:05.624 10:06:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:05.624 10:06:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82358 00:14:05.624 killing process with pid 82358 00:14:05.624 10:06:35 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:05.624 10:06:35 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:05.624 10:06:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82358' 00:14:05.624 10:06:35 -- common/autotest_common.sh@945 -- # kill 82358 00:14:05.624 10:06:35 -- common/autotest_common.sh@950 -- # wait 82358 00:14:05.624 [2024-07-14 10:06:36.143847] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:05.624 10:06:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:05.624 10:06:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:05.624 10:06:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:05.624 10:06:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:05.624 10:06:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:05.624 10:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.624 10:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.624 10:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.884 10:06:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:05.884 00:14:05.884 real 0m5.430s 00:14:05.884 user 
0m22.832s 00:14:05.884 sys 0m1.318s 00:14:05.884 10:06:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.884 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:05.884 ************************************ 00:14:05.884 END TEST nvmf_host_management 00:14:05.884 ************************************ 00:14:05.884 10:06:36 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:14:05.884 00:14:05.884 real 0m5.937s 00:14:05.884 user 0m22.947s 00:14:05.884 sys 0m1.559s 00:14:05.884 10:06:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:05.884 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:05.884 ************************************ 00:14:05.884 END TEST nvmf_host_management 00:14:05.884 ************************************ 00:14:05.884 10:06:36 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:05.884 10:06:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:05.884 10:06:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:05.884 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:05.884 ************************************ 00:14:05.884 START TEST nvmf_lvol 00:14:05.884 ************************************ 00:14:05.884 10:06:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:05.884 * Looking for test storage... 00:14:05.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:05.884 10:06:36 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:05.884 10:06:36 -- nvmf/common.sh@7 -- # uname -s 00:14:05.884 10:06:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:05.884 10:06:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:05.884 10:06:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:05.884 10:06:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:05.884 10:06:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:05.884 10:06:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:05.884 10:06:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:05.884 10:06:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:05.884 10:06:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:05.884 10:06:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:05.884 10:06:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:05.884 10:06:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:05.884 10:06:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:05.884 10:06:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:05.884 10:06:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:05.884 10:06:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:05.884 10:06:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:05.884 10:06:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:05.884 10:06:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:05.885 10:06:36 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.885 10:06:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.885 10:06:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.885 10:06:36 -- paths/export.sh@5 -- # export PATH 00:14:05.885 10:06:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:05.885 10:06:36 -- nvmf/common.sh@46 -- # : 0 00:14:05.885 10:06:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:05.885 10:06:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:05.885 10:06:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:05.885 10:06:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:05.885 10:06:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:05.885 10:06:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:05.885 10:06:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:05.885 10:06:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:05.885 10:06:36 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:05.885 10:06:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:05.885 10:06:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:14:05.885 10:06:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:05.885 10:06:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:05.885 10:06:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:05.885 10:06:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:05.885 10:06:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:05.885 10:06:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:05.885 10:06:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:05.885 10:06:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:05.885 10:06:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:05.885 10:06:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:05.885 10:06:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:05.885 10:06:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:05.885 10:06:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.885 10:06:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.885 10:06:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:05.885 10:06:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:05.885 10:06:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:05.885 10:06:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:05.885 10:06:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:05.885 10:06:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.885 10:06:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:05.885 10:06:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:05.885 10:06:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:05.885 10:06:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:05.885 10:06:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:05.885 10:06:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:06.144 Cannot find device "nvmf_tgt_br" 00:14:06.144 10:06:36 -- nvmf/common.sh@154 -- # true 00:14:06.144 10:06:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:06.144 Cannot find device "nvmf_tgt_br2" 00:14:06.144 10:06:36 -- nvmf/common.sh@155 -- # true 00:14:06.144 10:06:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:06.144 10:06:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:06.144 Cannot find device "nvmf_tgt_br" 00:14:06.144 10:06:36 -- nvmf/common.sh@157 -- # true 00:14:06.144 10:06:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:06.144 Cannot find device "nvmf_tgt_br2" 00:14:06.144 10:06:36 -- nvmf/common.sh@158 -- # true 00:14:06.144 10:06:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:06.145 10:06:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:06.145 10:06:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:06.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.145 10:06:36 -- nvmf/common.sh@161 -- # true 00:14:06.145 10:06:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:06.145 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:06.145 10:06:36 -- nvmf/common.sh@162 -- # true 00:14:06.145 10:06:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:06.145 10:06:36 -- nvmf/common.sh@168 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:14:06.145 10:06:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:06.145 10:06:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:06.145 10:06:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:06.145 10:06:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:06.145 10:06:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:06.145 10:06:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:06.145 10:06:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:06.145 10:06:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:06.145 10:06:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:06.145 10:06:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:06.145 10:06:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:06.145 10:06:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:06.145 10:06:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:06.145 10:06:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:06.145 10:06:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:06.145 10:06:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:06.145 10:06:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:06.145 10:06:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:06.145 10:06:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:06.404 10:06:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:06.404 10:06:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:06.404 10:06:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:06.404 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.404 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:14:06.404 00:14:06.404 --- 10.0.0.2 ping statistics --- 00:14:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.404 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:14:06.404 10:06:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:06.404 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:06.404 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:14:06.404 00:14:06.404 --- 10.0.0.3 ping statistics --- 00:14:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.404 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:14:06.404 10:06:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:06.404 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:06.404 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:06.404 00:14:06.404 --- 10.0.0.1 ping statistics --- 00:14:06.404 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.404 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:06.404 10:06:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.404 10:06:36 -- nvmf/common.sh@421 -- # return 0 00:14:06.404 10:06:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:06.404 10:06:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.404 10:06:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:06.404 10:06:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:06.404 10:06:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.404 10:06:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:06.404 10:06:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:06.404 10:06:36 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:06.404 10:06:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:06.404 10:06:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:06.404 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:06.404 10:06:36 -- nvmf/common.sh@469 -- # nvmfpid=82703 00:14:06.404 10:06:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:06.404 10:06:36 -- nvmf/common.sh@470 -- # waitforlisten 82703 00:14:06.404 10:06:36 -- common/autotest_common.sh@819 -- # '[' -z 82703 ']' 00:14:06.404 10:06:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.404 10:06:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:06.404 10:06:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.404 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.404 10:06:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:06.404 10:06:36 -- common/autotest_common.sh@10 -- # set +x 00:14:06.404 [2024-07-14 10:06:36.795346] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:06.404 [2024-07-14 10:06:36.795407] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.404 [2024-07-14 10:06:36.929220] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:06.663 [2024-07-14 10:06:37.007879] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:06.663 [2024-07-14 10:06:37.008024] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.663 [2024-07-14 10:06:37.008036] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.663 [2024-07-14 10:06:37.008045] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
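[editor's note] nvmftestinit has now rebuilt the same virtual topology the previous test tore down: one veth pair for the initiator side, two veth pairs whose far ends are moved into the nvmf_tgt_ns_spdk namespace for the target, and a bridge joining the host-side peers. Condensed from the commands traced above (error-path probes and cleanup attempts omitted):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # host -> target namespace, across the bridge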
00:14:06.663 [2024-07-14 10:06:37.008237] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.663 [2024-07-14 10:06:37.008339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.663 [2024-07-14 10:06:37.008344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.276 10:06:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:07.276 10:06:37 -- common/autotest_common.sh@852 -- # return 0 00:14:07.276 10:06:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:07.276 10:06:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:07.276 10:06:37 -- common/autotest_common.sh@10 -- # set +x 00:14:07.543 10:06:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:07.543 10:06:37 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:07.543 [2024-07-14 10:06:38.064051] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:07.544 10:06:38 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:07.802 10:06:38 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:07.802 10:06:38 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:08.061 10:06:38 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:08.061 10:06:38 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:08.320 10:06:38 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:08.579 10:06:39 -- target/nvmf_lvol.sh@29 -- # lvs=22895227-930f-40d1-b69c-c054bc8b341d 00:14:08.579 10:06:39 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 22895227-930f-40d1-b69c-c054bc8b341d lvol 20 00:14:08.838 10:06:39 -- target/nvmf_lvol.sh@32 -- # lvol=8859b43b-36ac-4243-bcf7-17db7cd12ed7 00:14:08.838 10:06:39 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:09.097 10:06:39 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8859b43b-36ac-4243-bcf7-17db7cd12ed7 00:14:09.355 10:06:39 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:09.613 [2024-07-14 10:06:39.953844] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:09.614 10:06:39 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:09.872 10:06:40 -- target/nvmf_lvol.sh@42 -- # perf_pid=82849 00:14:09.872 10:06:40 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:09.872 10:06:40 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:10.806 10:06:41 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 8859b43b-36ac-4243-bcf7-17db7cd12ed7 MY_SNAPSHOT 00:14:11.065 10:06:41 -- target/nvmf_lvol.sh@47 -- # snapshot=abd6bfb0-e667-4532-8f2b-cbfe999a2000 00:14:11.065 10:06:41 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 8859b43b-36ac-4243-bcf7-17db7cd12ed7 30 00:14:11.323 10:06:41 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone abd6bfb0-e667-4532-8f2b-cbfe999a2000 MY_CLONE 00:14:11.582 10:06:42 -- target/nvmf_lvol.sh@49 -- # clone=82e172a3-46a2-4d1d-8e6a-9f0b9b8e4b84 00:14:11.582 10:06:42 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 82e172a3-46a2-4d1d-8e6a-9f0b9b8e4b84 00:14:12.149 10:06:42 -- target/nvmf_lvol.sh@53 -- # wait 82849 00:14:20.263 Initializing NVMe Controllers 00:14:20.263 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:14:20.263 Controller IO queue size 128, less than required. 00:14:20.263 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:20.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:14:20.263 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:14:20.263 Initialization complete. Launching workers. 00:14:20.263 ======================================================== 00:14:20.263 Latency(us) 00:14:20.263 Device Information : IOPS MiB/s Average min max 00:14:20.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 11528.10 45.03 11110.53 1684.30 68391.65 00:14:20.263 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11689.80 45.66 10957.48 1574.08 79551.83 00:14:20.263 ======================================================== 00:14:20.263 Total : 23217.90 90.69 11033.47 1574.08 79551.83 00:14:20.263 00:14:20.263 10:06:50 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:20.263 10:06:50 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 8859b43b-36ac-4243-bcf7-17db7cd12ed7 00:14:20.521 10:06:50 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 22895227-930f-40d1-b69c-c054bc8b341d 00:14:20.779 10:06:51 -- target/nvmf_lvol.sh@60 -- # rm -f 00:14:20.779 10:06:51 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:14:20.779 10:06:51 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:14:20.779 10:06:51 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:20.779 10:06:51 -- nvmf/common.sh@116 -- # sync 00:14:20.779 10:06:51 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:20.779 10:06:51 -- nvmf/common.sh@119 -- # set +e 00:14:20.779 10:06:51 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:20.779 10:06:51 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:20.779 rmmod nvme_tcp 00:14:20.779 rmmod nvme_fabrics 00:14:20.779 rmmod nvme_keyring 00:14:20.779 10:06:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:20.779 10:06:51 -- nvmf/common.sh@123 -- # set -e 00:14:20.779 10:06:51 -- nvmf/common.sh@124 -- # return 0 00:14:20.779 10:06:51 -- nvmf/common.sh@477 -- # '[' -n 82703 ']' 00:14:20.779 10:06:51 -- nvmf/common.sh@478 -- # killprocess 82703 00:14:20.779 10:06:51 -- common/autotest_common.sh@926 -- # '[' -z 82703 ']' 00:14:20.779 10:06:51 -- common/autotest_common.sh@930 -- # kill -0 82703 00:14:20.779 10:06:51 -- common/autotest_common.sh@931 -- # uname 00:14:20.779 10:06:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:20.779 10:06:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o 
comm= 82703 00:14:20.779 killing process with pid 82703 00:14:20.779 10:06:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:20.779 10:06:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:20.779 10:06:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82703' 00:14:20.779 10:06:51 -- common/autotest_common.sh@945 -- # kill 82703 00:14:20.779 10:06:51 -- common/autotest_common.sh@950 -- # wait 82703 00:14:21.037 10:06:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:21.038 10:06:51 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:21.038 10:06:51 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:21.038 10:06:51 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:21.038 10:06:51 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:21.038 10:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.038 10:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.038 10:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.296 10:06:51 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:21.297 ************************************ 00:14:21.297 END TEST nvmf_lvol 00:14:21.297 ************************************ 00:14:21.297 00:14:21.297 real 0m15.298s 00:14:21.297 user 1m4.334s 00:14:21.297 sys 0m3.721s 00:14:21.297 10:06:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.297 10:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:21.297 10:06:51 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.297 10:06:51 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:21.297 10:06:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.297 10:06:51 -- common/autotest_common.sh@10 -- # set +x 00:14:21.297 ************************************ 00:14:21.297 START TEST nvmf_lvs_grow 00:14:21.297 ************************************ 00:14:21.297 10:06:51 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:14:21.297 * Looking for test storage... 
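[editor's note] For reference, the nvmf_lvol test that just finished assembled its target stack through the RPC chain traced between 10:06:38 and 10:06:42 above. A condensed sketch (rpc.py stands for the full scripts/rpc.py path used in the log; the UUIDs are this run's and would differ on replay):

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512                                  # Malloc0
rpc.py bdev_malloc_create 64 512                                  # Malloc1
rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)                  # 22895227-930f-40d1-b69c-c054bc8b341d
lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)                 # 8859b43b-36ac-4243-bcf7-17db7cd12ed7
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
# ...and, while spdk_nvme_perf wrote to the namespace:
snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
rpc.py bdev_lvol_resize "$lvol" 30
clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
rpc.py bdev_lvol_inflate "$clone"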
00:14:21.297 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:21.297 10:06:51 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:21.297 10:06:51 -- nvmf/common.sh@7 -- # uname -s 00:14:21.297 10:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.297 10:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.297 10:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.297 10:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.297 10:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.297 10:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.297 10:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.297 10:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.297 10:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.297 10:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:21.297 10:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:21.297 10:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.297 10:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.297 10:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:21.297 10:06:51 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:21.297 10:06:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:21.297 10:06:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:21.297 10:06:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:21.297 10:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 10:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 10:06:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 10:06:51 -- 
paths/export.sh@5 -- # export PATH 00:14:21.297 10:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:21.297 10:06:51 -- nvmf/common.sh@46 -- # : 0 00:14:21.297 10:06:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:21.297 10:06:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:21.297 10:06:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:21.297 10:06:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.297 10:06:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.297 10:06:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:21.297 10:06:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:21.297 10:06:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:21.297 10:06:51 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:21.297 10:06:51 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:14:21.297 10:06:51 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:14:21.297 10:06:51 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:21.297 10:06:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.297 10:06:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:21.297 10:06:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:21.297 10:06:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:21.297 10:06:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.297 10:06:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.297 10:06:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.297 10:06:51 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:21.297 10:06:51 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:21.297 10:06:51 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.297 10:06:51 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.297 10:06:51 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:21.297 10:06:51 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:21.297 10:06:51 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:21.297 10:06:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:21.297 10:06:51 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:21.297 10:06:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.297 10:06:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:21.297 10:06:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:21.297 10:06:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:21.297 10:06:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:21.297 10:06:51 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:21.297 10:06:51 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:21.297 Cannot find device "nvmf_tgt_br" 00:14:21.297 10:06:51 -- nvmf/common.sh@154 -- # true 00:14:21.297 10:06:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:21.297 Cannot find device "nvmf_tgt_br2" 00:14:21.297 10:06:51 -- nvmf/common.sh@155 -- # true 00:14:21.297 10:06:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:21.297 10:06:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:21.297 Cannot find device "nvmf_tgt_br" 00:14:21.297 10:06:51 -- nvmf/common.sh@157 -- # true 00:14:21.297 10:06:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:21.297 Cannot find device "nvmf_tgt_br2" 00:14:21.297 10:06:51 -- nvmf/common.sh@158 -- # true 00:14:21.297 10:06:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:21.556 10:06:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:21.556 10:06:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:21.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.556 10:06:51 -- nvmf/common.sh@161 -- # true 00:14:21.556 10:06:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:21.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:21.556 10:06:51 -- nvmf/common.sh@162 -- # true 00:14:21.556 10:06:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:21.556 10:06:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:21.556 10:06:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:21.556 10:06:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:21.556 10:06:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:21.556 10:06:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:21.556 10:06:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:21.556 10:06:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:21.556 10:06:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:21.556 10:06:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:21.556 10:06:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:21.556 10:06:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:21.556 10:06:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:21.556 10:06:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:21.556 10:06:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:21.556 10:06:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:21.556 10:06:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:21.556 10:06:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:21.556 10:06:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:21.556 10:06:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:21.556 10:06:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:21.556 10:06:52 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:21.556 10:06:52 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:21.556 10:06:52 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:21.556 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.556 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.086 ms 00:14:21.556 00:14:21.556 --- 10.0.0.2 ping statistics --- 00:14:21.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.556 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:14:21.556 10:06:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:21.556 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:21.556 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:14:21.556 00:14:21.556 --- 10.0.0.3 ping statistics --- 00:14:21.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.556 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:14:21.556 10:06:52 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:21.556 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:21.556 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:14:21.556 00:14:21.556 --- 10.0.0.1 ping statistics --- 00:14:21.556 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.556 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:14:21.556 10:06:52 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.556 10:06:52 -- nvmf/common.sh@421 -- # return 0 00:14:21.556 10:06:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:21.556 10:06:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.556 10:06:52 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:21.556 10:06:52 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:21.556 10:06:52 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.556 10:06:52 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:21.556 10:06:52 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:21.556 10:06:52 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:14:21.556 10:06:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:21.556 10:06:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:21.556 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.556 10:06:52 -- nvmf/common.sh@469 -- # nvmfpid=83206 00:14:21.556 10:06:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:21.556 10:06:52 -- nvmf/common.sh@470 -- # waitforlisten 83206 00:14:21.556 10:06:52 -- common/autotest_common.sh@819 -- # '[' -z 83206 ']' 00:14:21.556 10:06:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.556 10:06:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.556 10:06:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.556 10:06:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.556 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:21.815 [2024-07-14 10:06:52.131093] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
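[editor's note] The -m arguments seen in this file are CPU core bitmasks: the nvmf_lvol run used -m 0x7 (binary 111, reactors on cores 0-2, matching its three "Reactor started" lines), while this lvs_grow run uses -m 0x1 (core 0 only). A one-liner to expand such a mask, for illustration:

mask=0x7; for c in 0 1 2 3; do (( (mask >> c) & 1 )) && echo "core $c selected"; done
# -> cores 0, 1 and 2 selected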
00:14:21.815 [2024-07-14 10:06:52.131428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.815 [2024-07-14 10:06:52.271876] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:21.816 [2024-07-14 10:06:52.326535] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:21.816 [2024-07-14 10:06:52.326980] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.816 [2024-07-14 10:06:52.327002] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.816 [2024-07-14 10:06:52.327011] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.816 [2024-07-14 10:06:52.327039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.752 10:06:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.752 10:06:52 -- common/autotest_common.sh@852 -- # return 0 00:14:22.752 10:06:52 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:22.752 10:06:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:22.752 10:06:52 -- common/autotest_common.sh@10 -- # set +x 00:14:22.752 10:06:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:22.752 [2024-07-14 10:06:53.184941] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:14:22.752 10:06:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:14:22.752 10:06:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:22.752 10:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:22.752 ************************************ 00:14:22.752 START TEST lvs_grow_clean 00:14:22.752 ************************************ 00:14:22.752 10:06:53 -- common/autotest_common.sh@1104 -- # lvs_grow 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:22.752 10:06:53 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:23.092 10:06:53 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:23.092 10:06:53 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:23.363 10:06:53 -- target/nvmf_lvs_grow.sh@28 
-- # lvs=e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:23.364 10:06:53 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:23.364 10:06:53 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:23.621 10:06:54 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:23.621 10:06:54 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:23.621 10:06:54 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u e7bd3fbd-1395-4df8-853d-c2c68207889c lvol 150 00:14:23.879 10:06:54 -- target/nvmf_lvs_grow.sh@33 -- # lvol=28b9f3fd-7011-4c3e-886c-b0d027de754d 00:14:23.879 10:06:54 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:23.879 10:06:54 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:24.137 [2024-07-14 10:06:54.492984] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:24.137 [2024-07-14 10:06:54.493062] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:24.137 true 00:14:24.137 10:06:54 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:24.137 10:06:54 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:24.396 10:06:54 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:24.396 10:06:54 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:24.396 10:06:54 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 28b9f3fd-7011-4c3e-886c-b0d027de754d 00:14:24.654 10:06:55 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:24.913 [2024-07-14 10:06:55.277439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:24.913 10:06:55 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:25.171 10:06:55 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83362 00:14:25.171 10:06:55 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:25.171 10:06:55 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:25.171 10:06:55 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83362 /var/tmp/bdevperf.sock 00:14:25.171 10:06:55 -- common/autotest_common.sh@819 -- # '[' -z 83362 ']' 00:14:25.171 10:06:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:25.171 10:06:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:25.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:25.171 10:06:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
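The constants this test asserts against are not arbitrary; they fall out of the 4 MiB cluster size (--cluster-sz 4194304) and the file sizes passed to truncate above, roughly:

    200 MiB aio file / 4 MiB = 50 clusters, 1 of which holds lvstore metadata  -> total_data_clusters == 49
    150 MiB lvol = ceil(150/4) = 38 clusters = 38 * 1024 blocks of 4096 B      -> num_blocks == 38912
    after truncate -s 400M + bdev_aio_rescan: 100 - 1                          -> total_data_clusters == 99
    99 total - 38 allocated to the lvol                                        -> free_clusters == 61

The "1 cluster of lvstore metadata" split is inferred from the observed 49/99 values rather than stated anywhere in the log.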
00:14:25.171 10:06:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:25.171 10:06:55 -- common/autotest_common.sh@10 -- # set +x 00:14:25.171 [2024-07-14 10:06:55.537988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:25.171 [2024-07-14 10:06:55.538099] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83362 ] 00:14:25.171 [2024-07-14 10:06:55.678891] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.429 [2024-07-14 10:06:55.741146] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.995 10:06:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:25.995 10:06:56 -- common/autotest_common.sh@852 -- # return 0 00:14:25.995 10:06:56 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:26.253 Nvme0n1 00:14:26.253 10:06:56 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:26.512 [ 00:14:26.512 { 00:14:26.512 "aliases": [ 00:14:26.512 "28b9f3fd-7011-4c3e-886c-b0d027de754d" 00:14:26.512 ], 00:14:26.512 "assigned_rate_limits": { 00:14:26.512 "r_mbytes_per_sec": 0, 00:14:26.512 "rw_ios_per_sec": 0, 00:14:26.512 "rw_mbytes_per_sec": 0, 00:14:26.512 "w_mbytes_per_sec": 0 00:14:26.512 }, 00:14:26.512 "block_size": 4096, 00:14:26.512 "claimed": false, 00:14:26.512 "driver_specific": { 00:14:26.512 "mp_policy": "active_passive", 00:14:26.512 "nvme": [ 00:14:26.512 { 00:14:26.512 "ctrlr_data": { 00:14:26.512 "ana_reporting": false, 00:14:26.512 "cntlid": 1, 00:14:26.512 "firmware_revision": "24.01.1", 00:14:26.512 "model_number": "SPDK bdev Controller", 00:14:26.512 "multi_ctrlr": true, 00:14:26.512 "oacs": { 00:14:26.512 "firmware": 0, 00:14:26.512 "format": 0, 00:14:26.512 "ns_manage": 0, 00:14:26.512 "security": 0 00:14:26.512 }, 00:14:26.512 "serial_number": "SPDK0", 00:14:26.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:26.512 "vendor_id": "0x8086" 00:14:26.512 }, 00:14:26.512 "ns_data": { 00:14:26.512 "can_share": true, 00:14:26.512 "id": 1 00:14:26.512 }, 00:14:26.512 "trid": { 00:14:26.512 "adrfam": "IPv4", 00:14:26.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:26.512 "traddr": "10.0.0.2", 00:14:26.512 "trsvcid": "4420", 00:14:26.512 "trtype": "TCP" 00:14:26.512 }, 00:14:26.512 "vs": { 00:14:26.512 "nvme_version": "1.3" 00:14:26.512 } 00:14:26.512 } 00:14:26.512 ] 00:14:26.512 }, 00:14:26.512 "name": "Nvme0n1", 00:14:26.512 "num_blocks": 38912, 00:14:26.512 "product_name": "NVMe disk", 00:14:26.512 "supported_io_types": { 00:14:26.512 "abort": true, 00:14:26.512 "compare": true, 00:14:26.512 "compare_and_write": true, 00:14:26.512 "flush": true, 00:14:26.512 "nvme_admin": true, 00:14:26.512 "nvme_io": true, 00:14:26.512 "read": true, 00:14:26.512 "reset": true, 00:14:26.512 "unmap": true, 00:14:26.512 "write": true, 00:14:26.512 "write_zeroes": true 00:14:26.512 }, 00:14:26.512 "uuid": "28b9f3fd-7011-4c3e-886c-b0d027de754d", 00:14:26.512 "zoned": false 00:14:26.512 } 00:14:26.512 ] 00:14:26.512 10:06:56 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:26.512 
10:06:56 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83410 00:14:26.512 10:06:56 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:26.512 Running I/O for 10 seconds... 00:14:27.449 Latency(us) 00:14:27.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:27.449 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:27.449 Nvme0n1 : 1.00 7530.00 29.41 0.00 0.00 0.00 0.00 0.00 00:14:27.449 =================================================================================================================== 00:14:27.449 Total : 7530.00 29.41 0.00 0.00 0.00 0.00 0.00 00:14:27.449 00:14:28.386 10:06:58 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:28.646 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:28.646 Nvme0n1 : 2.00 7365.00 28.77 0.00 0.00 0.00 0.00 0.00 00:14:28.646 =================================================================================================================== 00:14:28.646 Total : 7365.00 28.77 0.00 0.00 0.00 0.00 0.00 00:14:28.646 00:14:28.904 true 00:14:28.904 10:06:59 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:28.904 10:06:59 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:29.163 10:06:59 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:29.163 10:06:59 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:29.163 10:06:59 -- target/nvmf_lvs_grow.sh@65 -- # wait 83410 00:14:29.731 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:29.731 Nvme0n1 : 3.00 7446.33 29.09 0.00 0.00 0.00 0.00 0.00 00:14:29.731 =================================================================================================================== 00:14:29.731 Total : 7446.33 29.09 0.00 0.00 0.00 0.00 0.00 00:14:29.731 00:14:30.667 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:30.667 Nvme0n1 : 4.00 7473.75 29.19 0.00 0.00 0.00 0.00 0.00 00:14:30.667 =================================================================================================================== 00:14:30.667 Total : 7473.75 29.19 0.00 0.00 0.00 0.00 0.00 00:14:30.667 00:14:31.602 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:31.602 Nvme0n1 : 5.00 7471.20 29.18 0.00 0.00 0.00 0.00 0.00 00:14:31.602 =================================================================================================================== 00:14:31.602 Total : 7471.20 29.18 0.00 0.00 0.00 0.00 0.00 00:14:31.602 00:14:32.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:32.537 Nvme0n1 : 6.00 7463.33 29.15 0.00 0.00 0.00 0.00 0.00 00:14:32.537 =================================================================================================================== 00:14:32.537 Total : 7463.33 29.15 0.00 0.00 0.00 0.00 0.00 00:14:32.537 00:14:33.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:33.481 Nvme0n1 : 7.00 7447.57 29.09 0.00 0.00 0.00 0.00 0.00 00:14:33.481 =================================================================================================================== 00:14:33.481 Total : 7447.57 29.09 0.00 0.00 0.00 0.00 0.00 00:14:33.481 00:14:34.856 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:14:34.856 Nvme0n1 : 8.00 7434.88 29.04 0.00 0.00 0.00 0.00 0.00 00:14:34.856 =================================================================================================================== 00:14:34.856 Total : 7434.88 29.04 0.00 0.00 0.00 0.00 0.00 00:14:34.856 00:14:35.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:35.793 Nvme0n1 : 9.00 7447.22 29.09 0.00 0.00 0.00 0.00 0.00 00:14:35.793 =================================================================================================================== 00:14:35.793 Total : 7447.22 29.09 0.00 0.00 0.00 0.00 0.00 00:14:35.793 00:14:36.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.730 Nvme0n1 : 10.00 7449.70 29.10 0.00 0.00 0.00 0.00 0.00 00:14:36.730 =================================================================================================================== 00:14:36.730 Total : 7449.70 29.10 0.00 0.00 0.00 0.00 0.00 00:14:36.730 00:14:36.730 00:14:36.730 Latency(us) 00:14:36.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:36.730 Nvme0n1 : 10.02 7450.71 29.10 0.00 0.00 17174.32 7864.32 36223.53 00:14:36.730 =================================================================================================================== 00:14:36.730 Total : 7450.71 29.10 0.00 0.00 17174.32 7864.32 36223.53 00:14:36.730 0 00:14:36.730 10:07:07 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83362 00:14:36.730 10:07:07 -- common/autotest_common.sh@926 -- # '[' -z 83362 ']' 00:14:36.730 10:07:07 -- common/autotest_common.sh@930 -- # kill -0 83362 00:14:36.730 10:07:07 -- common/autotest_common.sh@931 -- # uname 00:14:36.730 10:07:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:36.730 10:07:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83362 00:14:36.730 10:07:07 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:36.730 killing process with pid 83362 00:14:36.730 10:07:07 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:36.730 10:07:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83362' 00:14:36.730 10:07:07 -- common/autotest_common.sh@945 -- # kill 83362 00:14:36.730 Received shutdown signal, test time was about 10.000000 seconds 00:14:36.730 00:14:36.730 Latency(us) 00:14:36.730 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:36.730 =================================================================================================================== 00:14:36.730 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:36.730 10:07:07 -- common/autotest_common.sh@950 -- # wait 83362 00:14:36.730 10:07:07 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:36.987 10:07:07 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:36.987 10:07:07 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:37.248 10:07:07 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:37.248 10:07:07 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:14:37.248 10:07:07 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:37.507 [2024-07-14 10:07:07.963867] vbdev_lvol.c: 
150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:37.507 10:07:07 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:37.507 10:07:07 -- common/autotest_common.sh@640 -- # local es=0 00:14:37.507 10:07:07 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:37.507 10:07:07 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.507 10:07:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.507 10:07:07 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.507 10:07:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.507 10:07:08 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.507 10:07:07 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:37.507 10:07:08 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:37.507 10:07:08 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:37.507 10:07:08 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:37.766 2024/07/14 10:07:08 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:e7bd3fbd-1395-4df8-853d-c2c68207889c], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:37.766 request: 00:14:37.766 { 00:14:37.766 "method": "bdev_lvol_get_lvstores", 00:14:37.766 "params": { 00:14:37.766 "uuid": "e7bd3fbd-1395-4df8-853d-c2c68207889c" 00:14:37.766 } 00:14:37.766 } 00:14:37.766 Got JSON-RPC error response 00:14:37.766 GoRPCClient: error on JSON-RPC call 00:14:37.766 10:07:08 -- common/autotest_common.sh@643 -- # es=1 00:14:37.766 10:07:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:37.766 10:07:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:37.766 10:07:08 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:37.766 10:07:08 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:38.025 aio_bdev 00:14:38.025 10:07:08 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 28b9f3fd-7011-4c3e-886c-b0d027de754d 00:14:38.025 10:07:08 -- common/autotest_common.sh@887 -- # local bdev_name=28b9f3fd-7011-4c3e-886c-b0d027de754d 00:14:38.025 10:07:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:38.025 10:07:08 -- common/autotest_common.sh@889 -- # local i 00:14:38.025 10:07:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:38.025 10:07:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:38.025 10:07:08 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:38.284 10:07:08 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 28b9f3fd-7011-4c3e-886c-b0d027de754d -t 2000 00:14:38.542 [ 00:14:38.542 { 00:14:38.542 "aliases": [ 00:14:38.542 "lvs/lvol" 00:14:38.542 ], 00:14:38.542 "assigned_rate_limits": { 00:14:38.542 "r_mbytes_per_sec": 0, 00:14:38.542 "rw_ios_per_sec": 0, 00:14:38.542 
"rw_mbytes_per_sec": 0, 00:14:38.542 "w_mbytes_per_sec": 0 00:14:38.542 }, 00:14:38.542 "block_size": 4096, 00:14:38.542 "claimed": false, 00:14:38.542 "driver_specific": { 00:14:38.542 "lvol": { 00:14:38.542 "base_bdev": "aio_bdev", 00:14:38.542 "clone": false, 00:14:38.542 "esnap_clone": false, 00:14:38.542 "lvol_store_uuid": "e7bd3fbd-1395-4df8-853d-c2c68207889c", 00:14:38.542 "snapshot": false, 00:14:38.542 "thin_provision": false 00:14:38.542 } 00:14:38.542 }, 00:14:38.542 "name": "28b9f3fd-7011-4c3e-886c-b0d027de754d", 00:14:38.542 "num_blocks": 38912, 00:14:38.542 "product_name": "Logical Volume", 00:14:38.542 "supported_io_types": { 00:14:38.542 "abort": false, 00:14:38.542 "compare": false, 00:14:38.542 "compare_and_write": false, 00:14:38.542 "flush": false, 00:14:38.542 "nvme_admin": false, 00:14:38.542 "nvme_io": false, 00:14:38.542 "read": true, 00:14:38.542 "reset": true, 00:14:38.542 "unmap": true, 00:14:38.542 "write": true, 00:14:38.542 "write_zeroes": true 00:14:38.542 }, 00:14:38.542 "uuid": "28b9f3fd-7011-4c3e-886c-b0d027de754d", 00:14:38.542 "zoned": false 00:14:38.542 } 00:14:38.542 ] 00:14:38.542 10:07:08 -- common/autotest_common.sh@895 -- # return 0 00:14:38.542 10:07:08 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:38.542 10:07:08 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:38.801 10:07:09 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:38.801 10:07:09 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:38.801 10:07:09 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:39.058 10:07:09 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:39.058 10:07:09 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 28b9f3fd-7011-4c3e-886c-b0d027de754d 00:14:39.316 10:07:09 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7bd3fbd-1395-4df8-853d-c2c68207889c 00:14:39.316 10:07:09 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:39.574 10:07:09 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:39.832 ************************************ 00:14:39.832 END TEST lvs_grow_clean 00:14:39.832 ************************************ 00:14:39.832 00:14:39.832 real 0m17.056s 00:14:39.832 user 0m16.539s 00:14:39.832 sys 0m1.920s 00:14:39.832 10:07:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:39.832 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:14:39.832 10:07:10 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:39.832 10:07:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:39.832 10:07:10 -- common/autotest_common.sh@10 -- # set +x 00:14:39.832 ************************************ 00:14:39.832 START TEST lvs_grow_dirty 00:14:39.832 ************************************ 00:14:39.832 10:07:10 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@17 
-- # local bdevperf_pid run_test_pid 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:39.832 10:07:10 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:40.090 10:07:10 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:14:40.090 10:07:10 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:14:40.348 10:07:10 -- target/nvmf_lvs_grow.sh@28 -- # lvs=d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:40.348 10:07:10 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:40.348 10:07:10 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:14:40.606 10:07:11 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:14:40.606 10:07:11 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:14:40.606 10:07:11 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 lvol 150 00:14:40.865 10:07:11 -- target/nvmf_lvs_grow.sh@33 -- # lvol=40841313-ed00-491e-88c5-4590003474d4 00:14:40.865 10:07:11 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:40.865 10:07:11 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:14:41.124 [2024-07-14 10:07:11.539092] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:14:41.124 [2024-07-14 10:07:11.539187] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:14:41.124 true 00:14:41.124 10:07:11 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:41.124 10:07:11 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:14:41.382 10:07:11 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:14:41.383 10:07:11 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:41.641 10:07:11 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 40841313-ed00-491e-88c5-4590003474d4 00:14:41.641 10:07:12 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:14:41.900 10:07:12 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:42.159 10:07:12 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r 
/var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:14:42.159 10:07:12 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=83787 00:14:42.159 10:07:12 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:42.159 10:07:12 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 83787 /var/tmp/bdevperf.sock 00:14:42.159 10:07:12 -- common/autotest_common.sh@819 -- # '[' -z 83787 ']' 00:14:42.159 10:07:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:42.159 10:07:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:42.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:14:42.159 10:07:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:42.159 10:07:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:42.159 10:07:12 -- common/autotest_common.sh@10 -- # set +x 00:14:42.159 [2024-07-14 10:07:12.648474] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:42.159 [2024-07-14 10:07:12.648564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83787 ] 00:14:42.417 [2024-07-14 10:07:12.785978] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:42.417 [2024-07-14 10:07:12.884176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:42.984 10:07:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:42.984 10:07:13 -- common/autotest_common.sh@852 -- # return 0 00:14:42.984 10:07:13 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:14:43.549 Nvme0n1 00:14:43.549 10:07:13 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:14:43.549 [ 00:14:43.549 { 00:14:43.549 "aliases": [ 00:14:43.549 "40841313-ed00-491e-88c5-4590003474d4" 00:14:43.549 ], 00:14:43.549 "assigned_rate_limits": { 00:14:43.549 "r_mbytes_per_sec": 0, 00:14:43.549 "rw_ios_per_sec": 0, 00:14:43.549 "rw_mbytes_per_sec": 0, 00:14:43.549 "w_mbytes_per_sec": 0 00:14:43.549 }, 00:14:43.549 "block_size": 4096, 00:14:43.549 "claimed": false, 00:14:43.549 "driver_specific": { 00:14:43.549 "mp_policy": "active_passive", 00:14:43.549 "nvme": [ 00:14:43.549 { 00:14:43.549 "ctrlr_data": { 00:14:43.549 "ana_reporting": false, 00:14:43.549 "cntlid": 1, 00:14:43.549 "firmware_revision": "24.01.1", 00:14:43.549 "model_number": "SPDK bdev Controller", 00:14:43.549 "multi_ctrlr": true, 00:14:43.549 "oacs": { 00:14:43.549 "firmware": 0, 00:14:43.549 "format": 0, 00:14:43.549 "ns_manage": 0, 00:14:43.549 "security": 0 00:14:43.549 }, 00:14:43.549 "serial_number": "SPDK0", 00:14:43.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.549 "vendor_id": "0x8086" 00:14:43.549 }, 00:14:43.549 "ns_data": { 00:14:43.549 "can_share": true, 00:14:43.549 "id": 1 00:14:43.549 }, 00:14:43.549 "trid": { 00:14:43.549 "adrfam": "IPv4", 00:14:43.549 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:43.549 "traddr": "10.0.0.2", 00:14:43.549 "trsvcid": "4420", 00:14:43.549 "trtype": "TCP" 00:14:43.549 }, 
00:14:43.549 "vs": { 00:14:43.549 "nvme_version": "1.3" 00:14:43.549 } 00:14:43.549 } 00:14:43.549 ] 00:14:43.549 }, 00:14:43.549 "name": "Nvme0n1", 00:14:43.549 "num_blocks": 38912, 00:14:43.549 "product_name": "NVMe disk", 00:14:43.549 "supported_io_types": { 00:14:43.549 "abort": true, 00:14:43.549 "compare": true, 00:14:43.549 "compare_and_write": true, 00:14:43.549 "flush": true, 00:14:43.550 "nvme_admin": true, 00:14:43.550 "nvme_io": true, 00:14:43.550 "read": true, 00:14:43.550 "reset": true, 00:14:43.550 "unmap": true, 00:14:43.550 "write": true, 00:14:43.550 "write_zeroes": true 00:14:43.550 }, 00:14:43.550 "uuid": "40841313-ed00-491e-88c5-4590003474d4", 00:14:43.550 "zoned": false 00:14:43.550 } 00:14:43.550 ] 00:14:43.550 10:07:14 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:14:43.550 10:07:14 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=83834 00:14:43.550 10:07:14 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:14:43.807 Running I/O for 10 seconds... 00:14:44.741 Latency(us) 00:14:44.741 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:44.741 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:44.741 Nvme0n1 : 1.00 7725.00 30.18 0.00 0.00 0.00 0.00 0.00 00:14:44.741 =================================================================================================================== 00:14:44.741 Total : 7725.00 30.18 0.00 0.00 0.00 0.00 0.00 00:14:44.741 00:14:45.681 10:07:16 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:45.681 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:45.681 Nvme0n1 : 2.00 7706.00 30.10 0.00 0.00 0.00 0.00 0.00 00:14:45.681 =================================================================================================================== 00:14:45.681 Total : 7706.00 30.10 0.00 0.00 0.00 0.00 0.00 00:14:45.681 00:14:45.939 true 00:14:45.939 10:07:16 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:45.939 10:07:16 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:14:46.198 10:07:16 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:14:46.198 10:07:16 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:14:46.198 10:07:16 -- target/nvmf_lvs_grow.sh@65 -- # wait 83834 00:14:46.765 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:46.765 Nvme0n1 : 3.00 7695.33 30.06 0.00 0.00 0.00 0.00 0.00 00:14:46.765 =================================================================================================================== 00:14:46.765 Total : 7695.33 30.06 0.00 0.00 0.00 0.00 0.00 00:14:46.765 00:14:47.702 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:47.702 Nvme0n1 : 4.00 7675.00 29.98 0.00 0.00 0.00 0.00 0.00 00:14:47.702 =================================================================================================================== 00:14:47.702 Total : 7675.00 29.98 0.00 0.00 0.00 0.00 0.00 00:14:47.702 00:14:48.638 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:48.638 Nvme0n1 : 5.00 7648.00 29.88 0.00 0.00 0.00 0.00 0.00 00:14:48.638 
=================================================================================================================== 00:14:48.638 Total : 7648.00 29.88 0.00 0.00 0.00 0.00 0.00 00:14:48.639 00:14:50.017 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.017 Nvme0n1 : 6.00 7591.33 29.65 0.00 0.00 0.00 0.00 0.00 00:14:50.017 =================================================================================================================== 00:14:50.017 Total : 7591.33 29.65 0.00 0.00 0.00 0.00 0.00 00:14:50.017 00:14:50.997 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:50.997 Nvme0n1 : 7.00 7557.71 29.52 0.00 0.00 0.00 0.00 0.00 00:14:50.997 =================================================================================================================== 00:14:50.997 Total : 7557.71 29.52 0.00 0.00 0.00 0.00 0.00 00:14:50.997 00:14:51.931 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:51.931 Nvme0n1 : 8.00 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:14:51.931 =================================================================================================================== 00:14:51.931 Total : 7540.62 29.46 0.00 0.00 0.00 0.00 0.00 00:14:51.931 00:14:52.880 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:52.880 Nvme0n1 : 9.00 7298.78 28.51 0.00 0.00 0.00 0.00 0.00 00:14:52.880 =================================================================================================================== 00:14:52.880 Total : 7298.78 28.51 0.00 0.00 0.00 0.00 0.00 00:14:52.880 00:14:53.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.818 Nvme0n1 : 10.00 7293.70 28.49 0.00 0.00 0.00 0.00 0.00 00:14:53.818 =================================================================================================================== 00:14:53.818 Total : 7293.70 28.49 0.00 0.00 0.00 0.00 0.00 00:14:53.818 00:14:53.818 00:14:53.818 Latency(us) 00:14:53.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.818 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:14:53.818 Nvme0n1 : 10.00 7297.87 28.51 0.00 0.00 17533.16 5362.04 268816.76 00:14:53.818 =================================================================================================================== 00:14:53.818 Total : 7297.87 28.51 0.00 0.00 17533.16 5362.04 268816.76 00:14:53.818 0 00:14:53.818 10:07:24 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 83787 00:14:53.818 10:07:24 -- common/autotest_common.sh@926 -- # '[' -z 83787 ']' 00:14:53.818 10:07:24 -- common/autotest_common.sh@930 -- # kill -0 83787 00:14:53.818 10:07:24 -- common/autotest_common.sh@931 -- # uname 00:14:53.818 10:07:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:53.818 10:07:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83787 00:14:53.818 10:07:24 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:14:53.818 10:07:24 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:14:53.818 killing process with pid 83787 00:14:53.818 10:07:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83787' 00:14:53.818 Received shutdown signal, test time was about 10.000000 seconds 00:14:53.818 00:14:53.818 Latency(us) 00:14:53.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:53.818 
=================================================================================================================== 00:14:53.818 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:53.818 10:07:24 -- common/autotest_common.sh@945 -- # kill 83787 00:14:53.818 10:07:24 -- common/autotest_common.sh@950 -- # wait 83787 00:14:54.077 10:07:24 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:14:54.077 10:07:24 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:54.077 10:07:24 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:14:54.336 10:07:24 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:14:54.336 10:07:24 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:14:54.336 10:07:24 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 83206 00:14:54.336 10:07:24 -- target/nvmf_lvs_grow.sh@74 -- # wait 83206 00:14:54.595 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 83206 Killed "${NVMF_APP[@]}" "$@" 00:14:54.595 10:07:24 -- target/nvmf_lvs_grow.sh@74 -- # true 00:14:54.595 10:07:24 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:14:54.595 10:07:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:54.595 10:07:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:54.595 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:14:54.595 10:07:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:54.595 10:07:24 -- nvmf/common.sh@469 -- # nvmfpid=83985 00:14:54.595 10:07:24 -- nvmf/common.sh@470 -- # waitforlisten 83985 00:14:54.595 10:07:24 -- common/autotest_common.sh@819 -- # '[' -z 83985 ']' 00:14:54.595 10:07:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:54.595 10:07:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:54.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:54.595 10:07:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:54.595 10:07:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:54.595 10:07:24 -- common/autotest_common.sh@10 -- # set +x 00:14:54.595 [2024-07-14 10:07:24.955670] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:14:54.595 [2024-07-14 10:07:24.955752] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:54.595 [2024-07-14 10:07:25.087827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:54.853 [2024-07-14 10:07:25.159741] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:54.853 [2024-07-14 10:07:25.159909] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:54.853 [2024-07-14 10:07:25.159922] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:54.853 [2024-07-14 10:07:25.159931] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
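The kill -9 a few entries up is the whole point of the dirty variant: the first target (pid 83206) never gets to close the lvstore, so when the freshly started target re-attaches the same aio file the blobstore has to replay its metadata — the "Performing recovery on blobstore" and per-blob "Recover:" notices just below. A condensed sketch of that sequence, with the waitforlisten helper and rpc.py path abbreviated from the log:

    kill -9 "$nvmfpid"           # simulate a crash; no clean lvstore shutdown
    wait "$nvmfpid" || true      # reap the SIGKILLed target, ignore the status
    "${NVMF_APP[@]}" -m 0x1 &    # start a fresh target in the same namespace
    nvmfpid=$!
    waitforlisten "$nvmfpid"
    # re-attaching the file replays blobstore metadata (recovery notices expected)
    scripts/rpc.py bdev_aio_create test/nvmf/target/aio_bdev aio_bdev 4096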
00:14:54.853 [2024-07-14 10:07:25.159956] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:55.421 10:07:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:55.421 10:07:25 -- common/autotest_common.sh@852 -- # return 0 00:14:55.421 10:07:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:55.421 10:07:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:55.421 10:07:25 -- common/autotest_common.sh@10 -- # set +x 00:14:55.421 10:07:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:55.421 10:07:25 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:55.680 [2024-07-14 10:07:26.042302] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:14:55.680 [2024-07-14 10:07:26.042726] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:14:55.680 [2024-07-14 10:07:26.042934] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:14:55.681 10:07:26 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:14:55.681 10:07:26 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 40841313-ed00-491e-88c5-4590003474d4 00:14:55.681 10:07:26 -- common/autotest_common.sh@887 -- # local bdev_name=40841313-ed00-491e-88c5-4590003474d4 00:14:55.681 10:07:26 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:55.681 10:07:26 -- common/autotest_common.sh@889 -- # local i 00:14:55.681 10:07:26 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:55.681 10:07:26 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:55.681 10:07:26 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:55.940 10:07:26 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40841313-ed00-491e-88c5-4590003474d4 -t 2000 00:14:55.940 [ 00:14:55.940 { 00:14:55.940 "aliases": [ 00:14:55.940 "lvs/lvol" 00:14:55.940 ], 00:14:55.940 "assigned_rate_limits": { 00:14:55.940 "r_mbytes_per_sec": 0, 00:14:55.940 "rw_ios_per_sec": 0, 00:14:55.940 "rw_mbytes_per_sec": 0, 00:14:55.940 "w_mbytes_per_sec": 0 00:14:55.940 }, 00:14:55.940 "block_size": 4096, 00:14:55.940 "claimed": false, 00:14:55.940 "driver_specific": { 00:14:55.940 "lvol": { 00:14:55.940 "base_bdev": "aio_bdev", 00:14:55.940 "clone": false, 00:14:55.940 "esnap_clone": false, 00:14:55.940 "lvol_store_uuid": "d64ab936-e76c-47b7-b3fa-c565ab1a0762", 00:14:55.940 "snapshot": false, 00:14:55.940 "thin_provision": false 00:14:55.940 } 00:14:55.940 }, 00:14:55.940 "name": "40841313-ed00-491e-88c5-4590003474d4", 00:14:55.940 "num_blocks": 38912, 00:14:55.940 "product_name": "Logical Volume", 00:14:55.940 "supported_io_types": { 00:14:55.940 "abort": false, 00:14:55.940 "compare": false, 00:14:55.940 "compare_and_write": false, 00:14:55.940 "flush": false, 00:14:55.940 "nvme_admin": false, 00:14:55.940 "nvme_io": false, 00:14:55.940 "read": true, 00:14:55.940 "reset": true, 00:14:55.940 "unmap": true, 00:14:55.940 "write": true, 00:14:55.940 "write_zeroes": true 00:14:55.940 }, 00:14:55.940 "uuid": "40841313-ed00-491e-88c5-4590003474d4", 00:14:55.940 "zoned": false 00:14:55.940 } 00:14:55.940 ] 00:14:55.940 10:07:26 -- common/autotest_common.sh@895 -- # return 0 00:14:55.940 10:07:26 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:14:55.940 10:07:26 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:56.508 10:07:26 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:14:56.508 10:07:26 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:56.508 10:07:26 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:14:56.508 10:07:27 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:14:56.508 10:07:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:56.768 [2024-07-14 10:07:27.183695] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:14:56.768 10:07:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:56.768 10:07:27 -- common/autotest_common.sh@640 -- # local es=0 00:14:56.768 10:07:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:56.768 10:07:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.768 10:07:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.768 10:07:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.768 10:07:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.768 10:07:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.768 10:07:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:14:56.768 10:07:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:56.768 10:07:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:14:56.768 10:07:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:57.027 2024/07/14 10:07:27 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:d64ab936-e76c-47b7-b3fa-c565ab1a0762], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:14:57.027 request: 00:14:57.027 { 00:14:57.027 "method": "bdev_lvol_get_lvstores", 00:14:57.027 "params": { 00:14:57.027 "uuid": "d64ab936-e76c-47b7-b3fa-c565ab1a0762" 00:14:57.027 } 00:14:57.027 } 00:14:57.027 Got JSON-RPC error response 00:14:57.027 GoRPCClient: error on JSON-RPC call 00:14:57.027 10:07:27 -- common/autotest_common.sh@643 -- # es=1 00:14:57.027 10:07:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:14:57.027 10:07:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:14:57.027 10:07:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:14:57.027 10:07:27 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:14:57.286 aio_bdev 00:14:57.286 10:07:27 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 40841313-ed00-491e-88c5-4590003474d4 00:14:57.286 10:07:27 -- common/autotest_common.sh@887 -- # local bdev_name=40841313-ed00-491e-88c5-4590003474d4 00:14:57.286 10:07:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:14:57.286 10:07:27 -- 
common/autotest_common.sh@889 -- # local i 00:14:57.286 10:07:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:14:57.286 10:07:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:14:57.286 10:07:27 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:14:57.545 10:07:27 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 40841313-ed00-491e-88c5-4590003474d4 -t 2000 00:14:57.545 [ 00:14:57.545 { 00:14:57.545 "aliases": [ 00:14:57.545 "lvs/lvol" 00:14:57.545 ], 00:14:57.545 "assigned_rate_limits": { 00:14:57.545 "r_mbytes_per_sec": 0, 00:14:57.545 "rw_ios_per_sec": 0, 00:14:57.545 "rw_mbytes_per_sec": 0, 00:14:57.545 "w_mbytes_per_sec": 0 00:14:57.545 }, 00:14:57.545 "block_size": 4096, 00:14:57.545 "claimed": false, 00:14:57.545 "driver_specific": { 00:14:57.545 "lvol": { 00:14:57.545 "base_bdev": "aio_bdev", 00:14:57.545 "clone": false, 00:14:57.545 "esnap_clone": false, 00:14:57.545 "lvol_store_uuid": "d64ab936-e76c-47b7-b3fa-c565ab1a0762", 00:14:57.545 "snapshot": false, 00:14:57.545 "thin_provision": false 00:14:57.545 } 00:14:57.545 }, 00:14:57.545 "name": "40841313-ed00-491e-88c5-4590003474d4", 00:14:57.545 "num_blocks": 38912, 00:14:57.545 "product_name": "Logical Volume", 00:14:57.545 "supported_io_types": { 00:14:57.545 "abort": false, 00:14:57.545 "compare": false, 00:14:57.545 "compare_and_write": false, 00:14:57.545 "flush": false, 00:14:57.545 "nvme_admin": false, 00:14:57.545 "nvme_io": false, 00:14:57.545 "read": true, 00:14:57.545 "reset": true, 00:14:57.545 "unmap": true, 00:14:57.545 "write": true, 00:14:57.545 "write_zeroes": true 00:14:57.545 }, 00:14:57.545 "uuid": "40841313-ed00-491e-88c5-4590003474d4", 00:14:57.545 "zoned": false 00:14:57.545 } 00:14:57.545 ] 00:14:57.804 10:07:28 -- common/autotest_common.sh@895 -- # return 0 00:14:57.804 10:07:28 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:57.804 10:07:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:14:57.804 10:07:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:14:57.804 10:07:28 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:57.804 10:07:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:14:58.063 10:07:28 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:14:58.063 10:07:28 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 40841313-ed00-491e-88c5-4590003474d4 00:14:58.321 10:07:28 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d64ab936-e76c-47b7-b3fa-c565ab1a0762 00:14:58.579 10:07:28 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:14:58.838 10:07:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:14:59.097 ************************************ 00:14:59.097 END TEST lvs_grow_dirty 00:14:59.097 ************************************ 00:14:59.097 00:14:59.097 real 0m19.073s 00:14:59.097 user 0m37.248s 00:14:59.097 sys 0m9.860s 00:14:59.097 10:07:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.097 10:07:29 -- common/autotest_common.sh@10 -- # set +x 00:14:59.097 10:07:29 -- target/nvmf_lvs_grow.sh@1 
-- # process_shm --id 0 00:14:59.097 10:07:29 -- common/autotest_common.sh@796 -- # type=--id 00:14:59.097 10:07:29 -- common/autotest_common.sh@797 -- # id=0 00:14:59.097 10:07:29 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:14:59.097 10:07:29 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:14:59.097 10:07:29 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:14:59.097 10:07:29 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:14:59.097 10:07:29 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:14:59.097 10:07:29 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:14:59.097 nvmf_trace.0 00:14:59.097 10:07:29 -- common/autotest_common.sh@811 -- # return 0 00:14:59.097 10:07:29 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:14:59.097 10:07:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:59.097 10:07:29 -- nvmf/common.sh@116 -- # sync 00:14:59.356 10:07:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:59.356 10:07:29 -- nvmf/common.sh@119 -- # set +e 00:14:59.356 10:07:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:59.356 10:07:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:59.356 rmmod nvme_tcp 00:14:59.356 rmmod nvme_fabrics 00:14:59.356 rmmod nvme_keyring 00:14:59.356 10:07:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:59.356 10:07:29 -- nvmf/common.sh@123 -- # set -e 00:14:59.356 10:07:29 -- nvmf/common.sh@124 -- # return 0 00:14:59.356 10:07:29 -- nvmf/common.sh@477 -- # '[' -n 83985 ']' 00:14:59.356 10:07:29 -- nvmf/common.sh@478 -- # killprocess 83985 00:14:59.356 10:07:29 -- common/autotest_common.sh@926 -- # '[' -z 83985 ']' 00:14:59.356 10:07:29 -- common/autotest_common.sh@930 -- # kill -0 83985 00:14:59.356 10:07:29 -- common/autotest_common.sh@931 -- # uname 00:14:59.356 10:07:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:59.356 10:07:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 83985 00:14:59.356 killing process with pid 83985 00:14:59.356 10:07:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:59.356 10:07:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:59.356 10:07:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 83985' 00:14:59.356 10:07:29 -- common/autotest_common.sh@945 -- # kill 83985 00:14:59.356 10:07:29 -- common/autotest_common.sh@950 -- # wait 83985 00:14:59.615 10:07:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:59.615 10:07:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:59.615 10:07:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:59.615 10:07:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:59.615 10:07:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:59.615 10:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.615 10:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.615 10:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.615 10:07:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:59.615 00:14:59.615 real 0m38.414s 00:14:59.615 user 0m59.262s 00:14:59.615 sys 0m12.526s 00:14:59.615 10:07:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:59.615 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:14:59.615 ************************************ 00:14:59.615 END TEST 
nvmf_lvs_grow 00:14:59.615 ************************************ 00:14:59.615 10:07:30 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:59.615 10:07:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:59.615 10:07:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:59.615 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:14:59.615 ************************************ 00:14:59.615 START TEST nvmf_bdev_io_wait 00:14:59.615 ************************************ 00:14:59.615 10:07:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:14:59.875 * Looking for test storage... 00:14:59.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:14:59.875 10:07:30 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:59.875 10:07:30 -- nvmf/common.sh@7 -- # uname -s 00:14:59.875 10:07:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:59.875 10:07:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:59.875 10:07:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:59.875 10:07:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:59.875 10:07:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:59.875 10:07:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:59.875 10:07:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:59.875 10:07:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:59.875 10:07:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:59.875 10:07:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:59.875 10:07:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:59.875 10:07:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:14:59.875 10:07:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:59.875 10:07:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:59.875 10:07:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:59.875 10:07:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:59.875 10:07:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:59.875 10:07:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:59.875 10:07:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:59.875 10:07:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.875 10:07:30 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.875 10:07:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.875 10:07:30 -- paths/export.sh@5 -- # export PATH 00:14:59.875 10:07:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:59.875 10:07:30 -- nvmf/common.sh@46 -- # : 0 00:14:59.875 10:07:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:59.875 10:07:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:59.875 10:07:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:59.875 10:07:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:59.875 10:07:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:59.875 10:07:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:59.875 10:07:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:59.875 10:07:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:59.875 10:07:30 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:59.875 10:07:30 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:59.875 10:07:30 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:14:59.875 10:07:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:59.875 10:07:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:59.875 10:07:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:59.875 10:07:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:59.875 10:07:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:59.875 10:07:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:59.875 10:07:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:59.875 10:07:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:59.875 10:07:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:59.875 10:07:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:59.875 10:07:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:59.875 10:07:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:59.875 10:07:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
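
The nvmf_veth_init sequence traced below builds the whole test topology: the initiator keeps nvmf_init_if (10.0.0.1) on the host, the target ends of two veth pairs move into the nvmf_tgt_ns_spdk namespace as 10.0.0.2 and 10.0.0.3, and the host-side peers are tied together by the nvmf_br bridge. Condensed into a standalone sketch (same device names and addresses as the trace; run as root, error handling omitted):

#!/usr/bin/env bash
# Sketch of the topology nvmf_veth_init creates below, not the harness itself.
set -e
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # first target pair
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br
done
# Admit NVMe/TCP traffic on port 4420 and let the bridge forward between ports.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # reachability checks, as traced below

The "Cannot find device" and "Cannot open network namespace" messages in the trace are expected, not failures: the teardown half of nvmf_veth_init runs first against a clean host, so there is simply nothing to delete yet.
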
00:14:59.875 10:07:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:59.875 10:07:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:59.875 10:07:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:59.875 10:07:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:59.875 10:07:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:59.875 10:07:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:59.875 10:07:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:59.875 10:07:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:59.875 10:07:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:59.875 10:07:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:59.875 10:07:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:59.875 10:07:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:59.875 10:07:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:59.875 10:07:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:59.875 10:07:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:59.875 Cannot find device "nvmf_tgt_br" 00:14:59.875 10:07:30 -- nvmf/common.sh@154 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:59.875 Cannot find device "nvmf_tgt_br2" 00:14:59.875 10:07:30 -- nvmf/common.sh@155 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:59.875 10:07:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:59.875 Cannot find device "nvmf_tgt_br" 00:14:59.875 10:07:30 -- nvmf/common.sh@157 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:59.875 Cannot find device "nvmf_tgt_br2" 00:14:59.875 10:07:30 -- nvmf/common.sh@158 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:59.875 10:07:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:59.875 10:07:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:59.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.875 10:07:30 -- nvmf/common.sh@161 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:59.875 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:59.875 10:07:30 -- nvmf/common.sh@162 -- # true 00:14:59.875 10:07:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:59.875 10:07:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:59.875 10:07:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:59.875 10:07:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:59.875 10:07:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:59.875 10:07:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:59.875 10:07:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:59.875 10:07:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:59.875 10:07:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:00.135 
10:07:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:00.135 10:07:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:00.135 10:07:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:00.135 10:07:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:00.135 10:07:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:00.135 10:07:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:00.135 10:07:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:00.135 10:07:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:00.135 10:07:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:00.135 10:07:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:00.135 10:07:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:00.135 10:07:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:00.135 10:07:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:00.135 10:07:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:00.135 10:07:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:00.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:00.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:15:00.135 00:15:00.135 --- 10.0.0.2 ping statistics --- 00:15:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.135 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:15:00.135 10:07:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:00.135 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:00.135 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:15:00.135 00:15:00.135 --- 10.0.0.3 ping statistics --- 00:15:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.135 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:15:00.135 10:07:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:00.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:00.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:15:00.135 00:15:00.135 --- 10.0.0.1 ping statistics --- 00:15:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:00.135 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:15:00.135 10:07:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:00.135 10:07:30 -- nvmf/common.sh@421 -- # return 0 00:15:00.135 10:07:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:00.135 10:07:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:00.135 10:07:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:00.135 10:07:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:00.135 10:07:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:00.135 10:07:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:00.135 10:07:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:00.135 10:07:30 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:00.135 10:07:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:00.135 10:07:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:00.135 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.135 10:07:30 -- nvmf/common.sh@469 -- # nvmfpid=84394 00:15:00.135 10:07:30 -- nvmf/common.sh@470 -- # waitforlisten 84394 00:15:00.135 10:07:30 -- common/autotest_common.sh@819 -- # '[' -z 84394 ']' 00:15:00.135 10:07:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:00.135 10:07:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.135 10:07:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:00.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.135 10:07:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.135 10:07:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:00.135 10:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.135 [2024-07-14 10:07:30.619171] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:00.135 [2024-07-14 10:07:30.619281] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.394 [2024-07-14 10:07:30.759418] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.394 [2024-07-14 10:07:30.827757] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.394 [2024-07-14 10:07:30.828175] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.394 [2024-07-14 10:07:30.828395] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.394 [2024-07-14 10:07:30.828527] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
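
The target was started above inside the namespace with --wait-for-rpc, so it idles until configuration RPCs arrive on /var/tmp/spdk.sock; those RPCs are traced below through the harness's rpc_cmd wrapper. Stripped of the wrappers, the bring-up amounts to this sketch (scripts/rpc.py stands in for rpc_cmd, and the socket poll is a simplified stand-in for waitforlisten):

cd /home/vagrant/spdk_repo/spdk
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done    # wait for the RPC socket
# Deliberately tiny bdev_io pool (5 entries, cache of 1) so the 128-deep
# workloads exhaust it and exercise the bdev_io_wait retry path under test.
./scripts/rpc.py bdev_set_options -p 5 -c 1
./scripts/rpc.py framework_start_init
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0    # 64 MiB RAM disk, 512 B blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
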
00:15:00.394 [2024-07-14 10:07:30.828910] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.394 [2024-07-14 10:07:30.829034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.394 [2024-07-14 10:07:30.829113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.394 [2024-07-14 10:07:30.829115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.331 10:07:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:01.331 10:07:31 -- common/autotest_common.sh@852 -- # return 0 00:15:01.331 10:07:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:01.331 10:07:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 10:07:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 [2024-07-14 10:07:31.712916] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 Malloc0 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:01.331 10:07:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:01.331 10:07:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.331 [2024-07-14 10:07:31.767430] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:01.331 10:07:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=84447 00:15:01.331 10:07:31 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@30 -- # READ_PID=84449 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # config=() 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # local subsystem config 00:15:01.331 10:07:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:01.331 10:07:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:01.331 { 00:15:01.331 "params": { 00:15:01.331 "name": "Nvme$subsystem", 00:15:01.331 "trtype": "$TEST_TRANSPORT", 00:15:01.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.331 "adrfam": "ipv4", 00:15:01.331 "trsvcid": "$NVMF_PORT", 00:15:01.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.331 "hdgst": ${hdgst:-false}, 00:15:01.331 "ddgst": ${ddgst:-false} 00:15:01.331 }, 00:15:01.331 "method": "bdev_nvme_attach_controller" 00:15:01.331 } 00:15:01.331 EOF 00:15:01.331 )") 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # config=() 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # local subsystem config 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=84451 00:15:01.331 10:07:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:01.331 10:07:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:01.331 { 00:15:01.331 "params": { 00:15:01.331 "name": "Nvme$subsystem", 00:15:01.331 "trtype": "$TEST_TRANSPORT", 00:15:01.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.331 "adrfam": "ipv4", 00:15:01.331 "trsvcid": "$NVMF_PORT", 00:15:01.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.331 "hdgst": ${hdgst:-false}, 00:15:01.331 "ddgst": ${ddgst:-false} 00:15:01.331 }, 00:15:01.331 "method": "bdev_nvme_attach_controller" 00:15:01.331 } 00:15:01.331 EOF 00:15:01.331 )") 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=84454 00:15:01.331 10:07:31 -- nvmf/common.sh@542 -- # cat 00:15:01.331 10:07:31 -- nvmf/common.sh@542 -- # cat 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # config=() 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:01.331 10:07:31 -- nvmf/common.sh@520 -- # local subsystem config 00:15:01.331 10:07:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:01.331 10:07:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:01.331 { 00:15:01.331 "params": { 00:15:01.331 "name": "Nvme$subsystem", 00:15:01.331 "trtype": "$TEST_TRANSPORT", 00:15:01.331 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.331 "adrfam": "ipv4", 00:15:01.331 "trsvcid": "$NVMF_PORT", 00:15:01.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:15:01.331 "hdgst": ${hdgst:-false}, 00:15:01.331 "ddgst": ${ddgst:-false} 00:15:01.331 }, 00:15:01.331 "method": "bdev_nvme_attach_controller" 00:15:01.331 } 00:15:01.331 EOF 00:15:01.331 )") 00:15:01.331 10:07:31 -- target/bdev_io_wait.sh@35 -- # sync 00:15:01.332 10:07:31 -- nvmf/common.sh@542 -- # cat 00:15:01.332 10:07:31 -- nvmf/common.sh@544 -- # jq . 00:15:01.332 10:07:31 -- nvmf/common.sh@544 -- # jq . 00:15:01.332 10:07:31 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:01.332 10:07:31 -- nvmf/common.sh@520 -- # config=() 00:15:01.332 10:07:31 -- nvmf/common.sh@520 -- # local subsystem config 00:15:01.332 10:07:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:15:01.332 10:07:31 -- nvmf/common.sh@545 -- # IFS=, 00:15:01.332 10:07:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:15:01.332 { 00:15:01.332 "params": { 00:15:01.332 "name": "Nvme$subsystem", 00:15:01.332 "trtype": "$TEST_TRANSPORT", 00:15:01.332 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:01.332 "adrfam": "ipv4", 00:15:01.332 "trsvcid": "$NVMF_PORT", 00:15:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:01.332 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:01.332 "hdgst": ${hdgst:-false}, 00:15:01.332 "ddgst": ${ddgst:-false} 00:15:01.332 }, 00:15:01.332 "method": "bdev_nvme_attach_controller" 00:15:01.332 } 00:15:01.332 EOF 00:15:01.332 )") 00:15:01.332 10:07:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:01.332 "params": { 00:15:01.332 "name": "Nvme1", 00:15:01.332 "trtype": "tcp", 00:15:01.332 "traddr": "10.0.0.2", 00:15:01.332 "adrfam": "ipv4", 00:15:01.332 "trsvcid": "4420", 00:15:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.332 "hdgst": false, 00:15:01.332 "ddgst": false 00:15:01.332 }, 00:15:01.332 "method": "bdev_nvme_attach_controller" 00:15:01.332 }' 00:15:01.332 10:07:31 -- nvmf/common.sh@545 -- # IFS=, 00:15:01.332 10:07:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:01.332 "params": { 00:15:01.332 "name": "Nvme1", 00:15:01.332 "trtype": "tcp", 00:15:01.332 "traddr": "10.0.0.2", 00:15:01.332 "adrfam": "ipv4", 00:15:01.332 "trsvcid": "4420", 00:15:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.332 "hdgst": false, 00:15:01.332 "ddgst": false 00:15:01.332 }, 00:15:01.332 "method": "bdev_nvme_attach_controller" 00:15:01.332 }' 00:15:01.332 10:07:31 -- nvmf/common.sh@542 -- # cat 00:15:01.332 10:07:31 -- nvmf/common.sh@544 -- # jq . 00:15:01.332 10:07:31 -- nvmf/common.sh@545 -- # IFS=, 00:15:01.332 10:07:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:01.332 "params": { 00:15:01.332 "name": "Nvme1", 00:15:01.332 "trtype": "tcp", 00:15:01.332 "traddr": "10.0.0.2", 00:15:01.332 "adrfam": "ipv4", 00:15:01.332 "trsvcid": "4420", 00:15:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.332 "hdgst": false, 00:15:01.332 "ddgst": false 00:15:01.332 }, 00:15:01.332 "method": "bdev_nvme_attach_controller" 00:15:01.332 }' 00:15:01.332 10:07:31 -- nvmf/common.sh@544 -- # jq . 
00:15:01.332 10:07:31 -- nvmf/common.sh@545 -- # IFS=, 00:15:01.332 10:07:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:01.332 "params": { 00:15:01.332 "name": "Nvme1", 00:15:01.332 "trtype": "tcp", 00:15:01.332 "traddr": "10.0.0.2", 00:15:01.332 "adrfam": "ipv4", 00:15:01.332 "trsvcid": "4420", 00:15:01.332 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:01.332 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:01.332 "hdgst": false, 00:15:01.332 "ddgst": false 00:15:01.332 }, 00:15:01.332 "method": "bdev_nvme_attach_controller" 00:15:01.332 }' 00:15:01.332 [2024-07-14 10:07:31.831180] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:01.332 [2024-07-14 10:07:31.831441] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:01.332 [2024-07-14 10:07:31.833005] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:01.332 [2024-07-14 10:07:31.833254] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:01.332 [2024-07-14 10:07:31.844873] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:01.332 [2024-07-14 10:07:31.844946] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:01.332 10:07:31 -- target/bdev_io_wait.sh@37 -- # wait 84447 00:15:01.332 [2024-07-14 10:07:31.859481] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:01.332 [2024-07-14 10:07:31.859730] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:01.591 [2024-07-14 10:07:32.047145] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.591 [2024-07-14 10:07:32.121653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:15:01.591 [2024-07-14 10:07:32.125106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.850 [2024-07-14 10:07:32.198270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.850 [2024-07-14 10:07:32.200149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:15:01.850 [2024-07-14 10:07:32.270060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:01.850 [2024-07-14 10:07:32.273769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:01.850 Running I/O for 1 seconds... 00:15:01.850 [2024-07-14 10:07:32.344675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:15:01.850 Running I/O for 1 seconds... 00:15:02.108 Running I/O for 1 seconds... 00:15:02.108 Running I/O for 1 seconds... 
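
Each of the four bdevperf clients above gets its own single-controller JSON config, piped in as /dev/fd/63 via process substitution from gen_nvmf_target_json, plus a distinct core mask, instance id and workload: write on 0x10, read on 0x20, flush on 0x40, unmap on 0x80, all at queue depth 128 with 4 KiB I/O for 1 second. Written out long-hand, one instance looks like the sketch below; the params block is verbatim from the printf output above, but the surrounding "subsystems" wrapper is the usual SPDK JSON-config shape and is an assumption here, since the trace never prints it:

cd /home/vagrant/spdk_repo/spdk
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!
# The read, flush and unmap instances differ only in -m, -i and -w; the harness
# launches all four concurrently and waits on their PIDs (84447/84449/84451/84454).
wait "$WRITE_PID"
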
00:15:03.043 00:15:03.043 Latency(us) 00:15:03.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.043 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:03.043 Nvme1n1 : 1.02 7110.53 27.78 0.00 0.00 17850.50 9472.93 30027.40 00:15:03.043 =================================================================================================================== 00:15:03.043 Total : 7110.53 27.78 0.00 0.00 17850.50 9472.93 30027.40 00:15:03.043 00:15:03.043 Latency(us) 00:15:03.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.043 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:03.043 Nvme1n1 : 1.00 202537.21 791.16 0.00 0.00 629.35 262.52 1176.67 00:15:03.043 =================================================================================================================== 00:15:03.043 Total : 202537.21 791.16 0.00 0.00 629.35 262.52 1176.67 00:15:03.043 00:15:03.043 Latency(us) 00:15:03.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.043 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:03.043 Nvme1n1 : 1.01 9502.20 37.12 0.00 0.00 13412.21 7268.54 23831.27 00:15:03.043 =================================================================================================================== 00:15:03.043 Total : 9502.20 37.12 0.00 0.00 13412.21 7268.54 23831.27 00:15:03.043 00:15:03.043 Latency(us) 00:15:03.043 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:03.043 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:03.043 Nvme1n1 : 1.00 7397.79 28.90 0.00 0.00 17253.55 4766.25 45994.36 00:15:03.043 =================================================================================================================== 00:15:03.043 Total : 7397.79 28.90 0.00 0.00 17253.55 4766.25 45994.36 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@38 -- # wait 84449 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@39 -- # wait 84451 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@40 -- # wait 84454 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.302 10:07:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:03.302 10:07:33 -- common/autotest_common.sh@10 -- # set +x 00:15:03.302 10:07:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:03.302 10:07:33 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:03.302 10:07:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:03.302 10:07:33 -- nvmf/common.sh@116 -- # sync 00:15:03.302 10:07:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:03.302 10:07:33 -- nvmf/common.sh@119 -- # set +e 00:15:03.302 10:07:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:03.302 10:07:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:03.302 rmmod nvme_tcp 00:15:03.302 rmmod nvme_fabrics 00:15:03.302 rmmod nvme_keyring 00:15:03.302 10:07:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:03.560 10:07:33 -- nvmf/common.sh@123 -- # set -e 00:15:03.560 10:07:33 -- nvmf/common.sh@124 -- # return 0 00:15:03.560 10:07:33 -- nvmf/common.sh@477 -- # '[' -n 84394 ']' 00:15:03.560 10:07:33 -- nvmf/common.sh@478 -- # killprocess 84394 00:15:03.560 10:07:33 -- common/autotest_common.sh@926 -- # '[' -z 84394 ']' 00:15:03.560 10:07:33 -- common/autotest_common.sh@930 -- 
# kill -0 84394 00:15:03.560 10:07:33 -- common/autotest_common.sh@931 -- # uname 00:15:03.560 10:07:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:03.560 10:07:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84394 00:15:03.560 killing process with pid 84394 00:15:03.560 10:07:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:03.560 10:07:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:03.560 10:07:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84394' 00:15:03.560 10:07:33 -- common/autotest_common.sh@945 -- # kill 84394 00:15:03.560 10:07:33 -- common/autotest_common.sh@950 -- # wait 84394 00:15:03.560 10:07:34 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:03.560 10:07:34 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:03.560 10:07:34 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:03.560 10:07:34 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:03.560 10:07:34 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:03.560 10:07:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.560 10:07:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.560 10:07:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.560 10:07:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:03.560 ************************************ 00:15:03.560 END TEST nvmf_bdev_io_wait 00:15:03.561 ************************************ 00:15:03.561 00:15:03.561 real 0m3.975s 00:15:03.561 user 0m17.842s 00:15:03.561 sys 0m1.963s 00:15:03.561 10:07:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.561 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:03.819 10:07:34 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:03.819 10:07:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:03.819 10:07:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:03.819 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:03.819 ************************************ 00:15:03.819 START TEST nvmf_queue_depth 00:15:03.819 ************************************ 00:15:03.819 10:07:34 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:15:03.819 * Looking for test storage... 
00:15:03.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:03.819 10:07:34 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:03.819 10:07:34 -- nvmf/common.sh@7 -- # uname -s 00:15:03.819 10:07:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.819 10:07:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.819 10:07:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.819 10:07:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.819 10:07:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.819 10:07:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.819 10:07:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.819 10:07:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.819 10:07:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.819 10:07:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:03.819 10:07:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:03.819 10:07:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.819 10:07:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.819 10:07:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:03.819 10:07:34 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:03.819 10:07:34 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.819 10:07:34 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.819 10:07:34 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.819 10:07:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.819 10:07:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.819 10:07:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.819 10:07:34 -- 
paths/export.sh@5 -- # export PATH 00:15:03.819 10:07:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.819 10:07:34 -- nvmf/common.sh@46 -- # : 0 00:15:03.819 10:07:34 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:03.819 10:07:34 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:03.819 10:07:34 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:03.819 10:07:34 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.819 10:07:34 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.819 10:07:34 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:03.819 10:07:34 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:03.819 10:07:34 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:03.819 10:07:34 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:15:03.819 10:07:34 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:15:03.819 10:07:34 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:03.819 10:07:34 -- target/queue_depth.sh@19 -- # nvmftestinit 00:15:03.819 10:07:34 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:03.819 10:07:34 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:03.819 10:07:34 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:03.819 10:07:34 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:03.819 10:07:34 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:03.819 10:07:34 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:03.819 10:07:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:03.819 10:07:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.819 10:07:34 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:03.819 10:07:34 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:03.819 10:07:34 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:03.819 10:07:34 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:03.819 10:07:34 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:03.819 10:07:34 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:03.819 10:07:34 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:03.819 10:07:34 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:03.819 10:07:34 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:03.819 10:07:34 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:03.819 10:07:34 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:03.819 10:07:34 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:03.819 10:07:34 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:03.819 10:07:34 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:03.819 10:07:34 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:03.819 10:07:34 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:03.819 Cannot find device "nvmf_tgt_br" 00:15:03.819 10:07:34 -- nvmf/common.sh@154 -- # true 00:15:03.819 10:07:34 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:03.819 Cannot find device "nvmf_tgt_br2" 00:15:03.819 10:07:34 -- nvmf/common.sh@155 -- # true 00:15:03.819 10:07:34 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:03.819 10:07:34 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:03.819 Cannot find device "nvmf_tgt_br" 00:15:03.819 10:07:34 -- nvmf/common.sh@157 -- # true 00:15:03.819 10:07:34 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:03.819 Cannot find device "nvmf_tgt_br2" 00:15:03.819 10:07:34 -- nvmf/common.sh@158 -- # true 00:15:03.819 10:07:34 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:03.819 10:07:34 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:04.078 10:07:34 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.078 10:07:34 -- nvmf/common.sh@161 -- # true 00:15:04.078 10:07:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:04.078 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:04.078 10:07:34 -- nvmf/common.sh@162 -- # true 00:15:04.078 10:07:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:04.078 10:07:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:04.078 10:07:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:04.078 10:07:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:04.078 10:07:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:04.078 10:07:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:04.078 10:07:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:04.078 10:07:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:04.078 10:07:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:04.078 10:07:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:04.078 10:07:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:04.078 10:07:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:04.078 10:07:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:04.078 10:07:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:04.078 10:07:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:04.078 10:07:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:04.078 10:07:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:04.078 10:07:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:04.078 10:07:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:04.078 10:07:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:04.078 10:07:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:04.078 
10:07:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:04.078 10:07:34 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:04.078 10:07:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:04.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:04.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:15:04.078 00:15:04.078 --- 10.0.0.2 ping statistics --- 00:15:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.078 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:15:04.078 10:07:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:04.078 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:04.078 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:15:04.078 00:15:04.078 --- 10.0.0.3 ping statistics --- 00:15:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.078 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:15:04.078 10:07:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:04.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:04.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:04.078 00:15:04.078 --- 10.0.0.1 ping statistics --- 00:15:04.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:04.078 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:04.078 10:07:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:04.078 10:07:34 -- nvmf/common.sh@421 -- # return 0 00:15:04.078 10:07:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:04.078 10:07:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:04.078 10:07:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:04.078 10:07:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:04.078 10:07:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:04.078 10:07:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:04.078 10:07:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:04.078 10:07:34 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:15:04.078 10:07:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:04.078 10:07:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:04.078 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:04.078 10:07:34 -- nvmf/common.sh@469 -- # nvmfpid=84693 00:15:04.078 10:07:34 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:04.078 10:07:34 -- nvmf/common.sh@470 -- # waitforlisten 84693 00:15:04.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:04.078 10:07:34 -- common/autotest_common.sh@819 -- # '[' -z 84693 ']' 00:15:04.078 10:07:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:04.078 10:07:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:04.078 10:07:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:04.078 10:07:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:04.078 10:07:34 -- common/autotest_common.sh@10 -- # set +x 00:15:04.335 [2024-07-14 10:07:34.630594] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:04.335 [2024-07-14 10:07:34.630865] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:04.335 [2024-07-14 10:07:34.771154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:04.335 [2024-07-14 10:07:34.826333] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:04.335 [2024-07-14 10:07:34.826826] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:04.335 [2024-07-14 10:07:34.826960] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:04.335 [2024-07-14 10:07:34.827083] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:04.335 [2024-07-14 10:07:34.827288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:05.280 10:07:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:05.280 10:07:35 -- common/autotest_common.sh@852 -- # return 0 00:15:05.280 10:07:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:05.280 10:07:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:05.280 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.280 10:07:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:05.280 10:07:35 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:05.280 10:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.280 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.280 [2024-07-14 10:07:35.685441] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:05.280 10:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.280 10:07:35 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:05.280 10:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.280 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.280 Malloc0 00:15:05.280 10:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.280 10:07:35 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:05.280 10:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.280 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.280 10:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.281 10:07:35 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:05.281 10:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.281 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.281 10:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.281 10:07:35 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.281 10:07:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:05.281 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.281 [2024-07-14 10:07:35.751730] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.281 10:07:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:05.281 10:07:35 -- target/queue_depth.sh@30 -- # bdevperf_pid=84743 00:15:05.281 10:07:35 
-- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:15:05.281 10:07:35 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:05.281 10:07:35 -- target/queue_depth.sh@33 -- # waitforlisten 84743 /var/tmp/bdevperf.sock 00:15:05.281 10:07:35 -- common/autotest_common.sh@819 -- # '[' -z 84743 ']' 00:15:05.281 10:07:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:05.281 10:07:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:05.281 10:07:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:05.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:05.281 10:07:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:05.281 10:07:35 -- common/autotest_common.sh@10 -- # set +x 00:15:05.281 [2024-07-14 10:07:35.812235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:05.281 [2024-07-14 10:07:35.812509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84743 ] 00:15:05.549 [2024-07-14 10:07:35.952416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:05.549 [2024-07-14 10:07:36.013530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:06.493 10:07:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:06.493 10:07:36 -- common/autotest_common.sh@852 -- # return 0 00:15:06.493 10:07:36 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:06.493 10:07:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:06.493 10:07:36 -- common/autotest_common.sh@10 -- # set +x 00:15:06.493 NVMe0n1 00:15:06.493 10:07:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:06.493 10:07:36 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:06.493 Running I/O for 10 seconds... 
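
This test drives a far deeper queue than bdev_io_wait did: bdevperf starts idle (-z) on its own RPC socket, the NVMe/TCP controller is attached at runtime, and a 10-second verify workload at queue depth 1024 is then kicked off through bdevperf.py. Condensed into a sketch with the same arguments as the trace above (the socket poll stands in for the harness's waitforlisten):

cd /home/vagrant/spdk_repo/spdk
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
BDEVPERF_PID=$!
until [ -S /var/tmp/bdevperf.sock ]; do sleep 0.1; done    # wait for bdevperf's RPC socket
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
kill "$BDEVPERF_PID"    # the harness ends it via killprocess, as traced below

As a sanity check on the numbers that follow: with 1024 I/Os in flight, Little's law predicts an average latency of roughly 1024 / 17064 IOPS, about 60 ms, which matches the ~59.8 ms average the table reports; likewise 17064 IOPS at 4 KiB is the 66.66 MiB/s shown.
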
00:15:16.472 00:15:16.472 Latency(us) 00:15:16.472 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.472 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:15:16.472 Verification LBA range: start 0x0 length 0x4000 00:15:16.472 NVMe0n1 : 10.05 17064.01 66.66 0.00 0.00 59824.99 11736.90 50045.67 00:15:16.472 =================================================================================================================== 00:15:16.472 Total : 17064.01 66.66 0.00 0.00 59824.99 11736.90 50045.67 00:15:16.472 0 00:15:16.731 10:07:47 -- target/queue_depth.sh@39 -- # killprocess 84743 00:15:16.731 10:07:47 -- common/autotest_common.sh@926 -- # '[' -z 84743 ']' 00:15:16.731 10:07:47 -- common/autotest_common.sh@930 -- # kill -0 84743 00:15:16.731 10:07:47 -- common/autotest_common.sh@931 -- # uname 00:15:16.731 10:07:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.731 10:07:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84743 00:15:16.731 killing process with pid 84743 00:15:16.731 Received shutdown signal, test time was about 10.000000 seconds 00:15:16.731 00:15:16.731 Latency(us) 00:15:16.731 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.731 =================================================================================================================== 00:15:16.731 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:16.731 10:07:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:16.731 10:07:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:16.731 10:07:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84743' 00:15:16.731 10:07:47 -- common/autotest_common.sh@945 -- # kill 84743 00:15:16.731 10:07:47 -- common/autotest_common.sh@950 -- # wait 84743 00:15:16.731 10:07:47 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:16.731 10:07:47 -- target/queue_depth.sh@43 -- # nvmftestfini 00:15:16.731 10:07:47 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:16.731 10:07:47 -- nvmf/common.sh@116 -- # sync 00:15:16.731 10:07:47 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:16.731 10:07:47 -- nvmf/common.sh@119 -- # set +e 00:15:16.731 10:07:47 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:16.731 10:07:47 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:16.990 rmmod nvme_tcp 00:15:16.990 rmmod nvme_fabrics 00:15:16.990 rmmod nvme_keyring 00:15:16.990 10:07:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:16.990 10:07:47 -- nvmf/common.sh@123 -- # set -e 00:15:16.990 10:07:47 -- nvmf/common.sh@124 -- # return 0 00:15:16.990 10:07:47 -- nvmf/common.sh@477 -- # '[' -n 84693 ']' 00:15:16.990 10:07:47 -- nvmf/common.sh@478 -- # killprocess 84693 00:15:16.990 10:07:47 -- common/autotest_common.sh@926 -- # '[' -z 84693 ']' 00:15:16.990 10:07:47 -- common/autotest_common.sh@930 -- # kill -0 84693 00:15:16.990 10:07:47 -- common/autotest_common.sh@931 -- # uname 00:15:16.990 10:07:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:16.990 10:07:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 84693 00:15:16.990 killing process with pid 84693 00:15:16.990 10:07:47 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:15:16.990 10:07:47 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:15:16.990 10:07:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 84693' 00:15:16.990 10:07:47 -- 
common/autotest_common.sh@945 -- # kill 84693 00:15:16.990 10:07:47 -- common/autotest_common.sh@950 -- # wait 84693 00:15:17.249 10:07:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:17.249 10:07:47 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:17.249 10:07:47 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:17.249 10:07:47 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.249 10:07:47 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:17.249 10:07:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.249 10:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.249 10:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.249 10:07:47 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:17.249 ************************************ 00:15:17.249 END TEST nvmf_queue_depth 00:15:17.249 ************************************ 00:15:17.249 00:15:17.249 real 0m13.438s 00:15:17.249 user 0m22.960s 00:15:17.249 sys 0m2.177s 00:15:17.249 10:07:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.249 10:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:17.249 10:07:47 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:17.249 10:07:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:17.249 10:07:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.249 10:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:17.249 ************************************ 00:15:17.249 START TEST nvmf_multipath 00:15:17.249 ************************************ 00:15:17.249 10:07:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:15:17.249 * Looking for test storage... 
00:15:17.249 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:17.249 10:07:47 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:17.249 10:07:47 -- nvmf/common.sh@7 -- # uname -s 00:15:17.249 10:07:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.249 10:07:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.249 10:07:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.249 10:07:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.249 10:07:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.249 10:07:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.249 10:07:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.249 10:07:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.249 10:07:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.249 10:07:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.249 10:07:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:17.250 10:07:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:17.250 10:07:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.250 10:07:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.250 10:07:47 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:17.250 10:07:47 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:17.250 10:07:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.250 10:07:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.250 10:07:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.250 10:07:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.250 10:07:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.250 10:07:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.250 10:07:47 -- 
paths/export.sh@5 -- # export PATH 00:15:17.250 10:07:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.250 10:07:47 -- nvmf/common.sh@46 -- # : 0 00:15:17.250 10:07:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:17.250 10:07:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:17.250 10:07:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:17.250 10:07:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.250 10:07:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.250 10:07:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:17.250 10:07:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:17.250 10:07:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:17.250 10:07:47 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:17.250 10:07:47 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:17.250 10:07:47 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:15:17.250 10:07:47 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:17.250 10:07:47 -- target/multipath.sh@43 -- # nvmftestinit 00:15:17.250 10:07:47 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:17.250 10:07:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.250 10:07:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:17.250 10:07:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:17.250 10:07:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:17.250 10:07:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.250 10:07:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.250 10:07:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.250 10:07:47 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:17.250 10:07:47 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:17.250 10:07:47 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:17.250 10:07:47 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:17.250 10:07:47 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:17.250 10:07:47 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:17.250 10:07:47 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:17.250 10:07:47 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:17.250 10:07:47 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:17.250 10:07:47 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:17.250 10:07:47 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:17.250 10:07:47 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:17.250 10:07:47 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:17.250 10:07:47 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:17.250 10:07:47 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:17.250 10:07:47 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:17.250 10:07:47 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:17.250 10:07:47 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:17.250 10:07:47 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:17.250 10:07:47 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:17.250 Cannot find device "nvmf_tgt_br" 00:15:17.250 10:07:47 -- nvmf/common.sh@154 -- # true 00:15:17.250 10:07:47 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:17.250 Cannot find device "nvmf_tgt_br2" 00:15:17.250 10:07:47 -- nvmf/common.sh@155 -- # true 00:15:17.250 10:07:47 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:17.250 10:07:47 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:17.250 Cannot find device "nvmf_tgt_br" 00:15:17.509 10:07:47 -- nvmf/common.sh@157 -- # true 00:15:17.509 10:07:47 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:17.509 Cannot find device "nvmf_tgt_br2" 00:15:17.509 10:07:47 -- nvmf/common.sh@158 -- # true 00:15:17.509 10:07:47 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:17.509 10:07:47 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:17.509 10:07:47 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:17.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.509 10:07:47 -- nvmf/common.sh@161 -- # true 00:15:17.509 10:07:47 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:17.509 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:17.509 10:07:47 -- nvmf/common.sh@162 -- # true 00:15:17.509 10:07:47 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:17.509 10:07:47 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:17.509 10:07:47 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:17.509 10:07:47 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:17.509 10:07:47 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:17.509 10:07:47 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:17.509 10:07:47 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:17.509 10:07:47 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:17.509 10:07:47 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:17.509 10:07:47 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:17.509 10:07:47 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:17.509 10:07:47 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:17.509 10:07:47 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:17.509 10:07:47 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:17.509 10:07:47 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:17.509 10:07:47 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:17.509 10:07:47 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:17.509 10:07:47 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:17.509 10:07:48 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:17.509 10:07:48 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:17.509 10:07:48 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:17.509 10:07:48 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:17.509 10:07:48 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:17.509 10:07:48 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:17.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:17.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:15:17.768 00:15:17.768 --- 10.0.0.2 ping statistics --- 00:15:17.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.768 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:15:17.768 10:07:48 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:17.768 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:17.768 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:15:17.768 00:15:17.768 --- 10.0.0.3 ping statistics --- 00:15:17.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.768 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:15:17.768 10:07:48 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:17.768 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:17.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:15:17.768 00:15:17.768 --- 10.0.0.1 ping statistics --- 00:15:17.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:17.768 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:15:17.768 10:07:48 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:17.768 10:07:48 -- nvmf/common.sh@421 -- # return 0 00:15:17.768 10:07:48 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:17.768 10:07:48 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:17.768 10:07:48 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:17.768 10:07:48 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:17.768 10:07:48 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:17.768 10:07:48 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:17.768 10:07:48 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:17.768 10:07:48 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:15:17.768 10:07:48 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:15:17.768 10:07:48 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:15:17.768 10:07:48 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:17.768 10:07:48 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:17.768 10:07:48 -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 10:07:48 -- nvmf/common.sh@469 -- # nvmfpid=85068 00:15:17.768 10:07:48 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:17.768 10:07:48 -- nvmf/common.sh@470 -- # waitforlisten 85068 00:15:17.768 10:07:48 -- common/autotest_common.sh@819 -- # '[' -z 85068 ']' 00:15:17.768 10:07:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:17.768 10:07:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:17.768 10:07:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
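The three pings above confirm the virtual test network before the target application is started inside the namespace. The topology, reconstructed from the ip/iptables commands in this section, is: one initiator veth (nvmf_init_if, 10.0.0.1) in the root namespace and two target veths (10.0.0.2 and 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace, with all peer ends enslaved to the nvmf_br bridge. A minimal stand-alone reproduction (run as root, assuming none of the devices exist yet):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    # enslave the root-namespace peer ends so initiator and target can talk
    for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
        ip link set "$dev" master nvmf_br
    done
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> first target address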
00:15:17.768 10:07:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.768 10:07:48 -- common/autotest_common.sh@10 -- # set +x 00:15:17.768 [2024-07-14 10:07:48.152801] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:17.768 [2024-07-14 10:07:48.152892] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:17.768 [2024-07-14 10:07:48.292452] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:18.026 [2024-07-14 10:07:48.357981] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:18.026 [2024-07-14 10:07:48.358165] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:18.026 [2024-07-14 10:07:48.358178] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:18.026 [2024-07-14 10:07:48.358186] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:18.026 [2024-07-14 10:07:48.358332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:18.026 [2024-07-14 10:07:48.358586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:18.026 [2024-07-14 10:07:48.358917] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:18.026 [2024-07-14 10:07:48.358959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:18.593 10:07:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.593 10:07:49 -- common/autotest_common.sh@852 -- # return 0 00:15:18.593 10:07:49 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:18.593 10:07:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:18.593 10:07:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.853 10:07:49 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:18.853 10:07:49 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:19.112 [2024-07-14 10:07:49.402687] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.112 10:07:49 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:19.370 Malloc0 00:15:19.370 10:07:49 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:15:19.370 10:07:49 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:19.629 10:07:50 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.888 [2024-07-14 10:07:50.329403] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.888 10:07:50 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:15:20.146 [2024-07-14 10:07:50.533516] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:15:20.146 10:07:50 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:15:20.404 10:07:50 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:15:20.663 10:07:50 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:15:20.663 10:07:50 -- common/autotest_common.sh@1177 -- # local i=0 00:15:20.663 10:07:50 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:15:20.663 10:07:50 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:15:20.663 10:07:50 -- common/autotest_common.sh@1184 -- # sleep 2 00:15:22.565 10:07:52 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:15:22.565 10:07:52 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:15:22.565 10:07:52 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:15:22.565 10:07:52 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:15:22.565 10:07:52 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:15:22.565 10:07:52 -- common/autotest_common.sh@1187 -- # return 0 00:15:22.565 10:07:52 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:15:22.565 10:07:52 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:15:22.565 10:07:52 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:15:22.565 10:07:52 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:15:22.565 10:07:52 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:15:22.565 10:07:52 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:15:22.565 10:07:52 -- target/multipath.sh@38 -- # return 0 00:15:22.565 10:07:52 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:15:22.565 10:07:52 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:15:22.565 10:07:52 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:15:22.565 10:07:52 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:15:22.565 10:07:52 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:15:22.565 10:07:52 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:15:22.565 10:07:52 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:15:22.565 10:07:52 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:22.565 10:07:52 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.565 10:07:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:22.565 10:07:52 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:22.565 10:07:52 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:22.565 10:07:52 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:15:22.565 10:07:52 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:22.565 10:07:52 -- target/multipath.sh@22 -- # local timeout=20 00:15:22.565 10:07:52 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:22.565 10:07:52 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:22.565 10:07:52 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:22.566 10:07:52 -- target/multipath.sh@85 -- # echo numa 00:15:22.566 10:07:52 -- target/multipath.sh@88 -- # fio_pid=85206 00:15:22.566 10:07:52 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:22.566 10:07:52 -- target/multipath.sh@90 -- # sleep 1 00:15:22.566 [global] 00:15:22.566 thread=1 00:15:22.566 invalidate=1 00:15:22.566 rw=randrw 00:15:22.566 time_based=1 00:15:22.566 runtime=6 00:15:22.566 ioengine=libaio 00:15:22.566 direct=1 00:15:22.566 bs=4096 00:15:22.566 iodepth=128 00:15:22.566 norandommap=0 00:15:22.566 numjobs=1 00:15:22.566 00:15:22.566 verify_dump=1 00:15:22.566 verify_backlog=512 00:15:22.566 verify_state_save=0 00:15:22.566 do_verify=1 00:15:22.566 verify=crc32c-intel 00:15:22.566 [job0] 00:15:22.566 filename=/dev/nvme0n1 00:15:22.566 Could not set queue depth (nvme0n1) 00:15:22.825 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:22.825 fio-3.35 00:15:22.825 Starting 1 thread 00:15:23.758 10:07:53 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:23.758 10:07:54 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:24.017 10:07:54 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:15:24.017 10:07:54 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:24.017 10:07:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:24.017 10:07:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:24.017 10:07:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:24.017 10:07:54 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:24.017 10:07:54 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:15:24.017 10:07:54 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:24.017 10:07:54 -- target/multipath.sh@22 -- # local timeout=20 00:15:24.017 10:07:54 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:24.017 10:07:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.017 10:07:54 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:24.017 10:07:54 -- target/multipath.sh@25 -- # sleep 1s 00:15:24.953 10:07:55 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:24.953 10:07:55 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:24.953 10:07:55 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:24.953 10:07:55 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:25.211 10:07:55 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:25.469 10:07:56 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:15:25.469 10:07:56 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:25.469 10:07:56 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.470 10:07:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:25.470 10:07:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:25.470 10:07:56 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:25.470 10:07:56 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:15:25.470 10:07:56 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:25.470 10:07:56 -- target/multipath.sh@22 -- # local timeout=20 00:15:25.470 10:07:56 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:25.470 10:07:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:25.470 10:07:56 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:25.470 10:07:56 -- target/multipath.sh@25 -- # sleep 1s 00:15:26.845 10:07:57 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:26.845 10:07:57 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:26.845 10:07:57 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:26.845 10:07:57 -- target/multipath.sh@104 -- # wait 85206 00:15:29.375 00:15:29.375 job0: (groupid=0, jobs=1): err= 0: pid=85227: Sun Jul 14 10:07:59 2024 00:15:29.375 read: IOPS=12.6k, BW=49.2MiB/s (51.6MB/s)(296MiB/6005msec) 00:15:29.375 slat (usec): min=4, max=5051, avg=45.55, stdev=203.96 00:15:29.375 clat (usec): min=428, max=12755, avg=6990.00, stdev=1066.90 00:15:29.375 lat (usec): min=480, max=12765, avg=7035.55, stdev=1074.73 00:15:29.375 clat percentiles (usec): 00:15:29.375 | 1.00th=[ 4228], 5.00th=[ 5473], 10.00th=[ 5932], 20.00th=[ 6259], 00:15:29.375 | 30.00th=[ 6390], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:15:29.375 | 70.00th=[ 7504], 80.00th=[ 7767], 90.00th=[ 8160], 95.00th=[ 8717], 00:15:29.375 | 99.00th=[10159], 99.50th=[10683], 99.90th=[11600], 99.95th=[11863], 00:15:29.375 | 99.99th=[12518] 00:15:29.375 bw ( KiB/s): min=12928, max=32776, per=51.49%, avg=25954.36, stdev=6454.64, samples=11 00:15:29.375 iops : min= 3232, max= 8194, avg=6488.55, stdev=1613.62, samples=11 00:15:29.375 write: IOPS=7233, BW=28.3MiB/s (29.6MB/s)(149MiB/5259msec); 0 zone resets 00:15:29.375 slat (usec): min=14, max=2139, avg=57.56, stdev=139.72 00:15:29.375 clat (usec): min=402, max=14442, avg=6110.00, stdev=904.29 00:15:29.375 lat (usec): min=472, max=14652, avg=6167.56, stdev=907.03 00:15:29.375 clat percentiles (usec): 00:15:29.375 | 1.00th=[ 3458], 5.00th=[ 4359], 10.00th=[ 5145], 20.00th=[ 5604], 00:15:29.375 | 30.00th=[ 5800], 40.00th=[ 5997], 50.00th=[ 6194], 60.00th=[ 6325], 00:15:29.375 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6980], 95.00th=[ 7242], 00:15:29.375 | 99.00th=[ 8979], 99.50th=[ 9634], 99.90th=[10683], 99.95th=[10814], 00:15:29.375 | 99.99th=[11338] 00:15:29.376 bw ( KiB/s): min=13288, max=32048, per=89.71%, avg=25958.55, stdev=6251.42, samples=11 00:15:29.376 iops : min= 3322, max= 8012, avg=6489.64, stdev=1562.86, samples=11 00:15:29.376 lat (usec) : 500=0.01% 00:15:29.376 lat (msec) : 2=0.01%, 4=1.52%, 10=97.40%, 20=1.06% 00:15:29.376 cpu : usr=5.95%, sys=24.35%, ctx=7052, majf=0, minf=108 00:15:29.376 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:29.376 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:29.376 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:29.376 issued rwts: total=75670,38043,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:29.376 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:29.376 00:15:29.376 Run status group 0 (all jobs): 00:15:29.376 READ: bw=49.2MiB/s (51.6MB/s), 49.2MiB/s-49.2MiB/s (51.6MB/s-51.6MB/s), io=296MiB (310MB), run=6005-6005msec 00:15:29.376 WRITE: bw=28.3MiB/s (29.6MB/s), 28.3MiB/s-28.3MiB/s (29.6MB/s-29.6MB/s), io=149MiB (156MB), run=5259-5259msec 00:15:29.376 00:15:29.376 Disk stats (read/write): 00:15:29.376 nvme0n1: ios=74319/37613, merge=0/0, ticks=485313/213630, in_queue=698943, util=98.62% 00:15:29.376 10:07:59 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:15:29.376 10:07:59 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:15:29.376 10:07:59 -- target/multipath.sh@109 -- # check_ana_state 
nvme0c0n1 optimized 00:15:29.376 10:07:59 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:15:29.376 10:07:59 -- target/multipath.sh@22 -- # local timeout=20 00:15:29.376 10:07:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:29.376 10:07:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:29.376 10:07:59 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:29.376 10:07:59 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:15:29.376 10:07:59 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:15:29.376 10:07:59 -- target/multipath.sh@22 -- # local timeout=20 00:15:29.376 10:07:59 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:29.376 10:07:59 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:29.376 10:07:59 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:15:29.376 10:07:59 -- target/multipath.sh@25 -- # sleep 1s 00:15:30.323 10:08:00 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:30.323 10:08:00 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:30.323 10:08:00 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:15:30.323 10:08:00 -- target/multipath.sh@113 -- # echo round-robin 00:15:30.323 10:08:00 -- target/multipath.sh@116 -- # fio_pid=85354 00:15:30.323 10:08:00 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:15:30.323 10:08:00 -- target/multipath.sh@118 -- # sleep 1 00:15:30.323 [global] 00:15:30.323 thread=1 00:15:30.323 invalidate=1 00:15:30.323 rw=randrw 00:15:30.323 time_based=1 00:15:30.323 runtime=6 00:15:30.323 ioengine=libaio 00:15:30.323 direct=1 00:15:30.323 bs=4096 00:15:30.323 iodepth=128 00:15:30.323 norandommap=0 00:15:30.323 numjobs=1 00:15:30.323 00:15:30.323 verify_dump=1 00:15:30.323 verify_backlog=512 00:15:30.323 verify_state_save=0 00:15:30.323 do_verify=1 00:15:30.323 verify=crc32c-intel 00:15:30.323 [job0] 00:15:30.323 filename=/dev/nvme0n1 00:15:30.592 Could not set queue depth (nvme0n1) 00:15:30.592 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:15:30.592 fio-3.35 00:15:30.592 Starting 1 thread 00:15:31.527 10:08:01 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:15:31.784 10:08:02 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:15:32.043 10:08:02 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:15:32.043 10:08:02 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:15:32.043 10:08:02 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.043 10:08:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:32.043 10:08:02 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:15:32.043 10:08:02 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:32.043 10:08:02 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:15:32.043 10:08:02 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:15:32.043 10:08:02 -- target/multipath.sh@22 -- # local timeout=20 00:15:32.043 10:08:02 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:32.043 10:08:02 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.043 10:08:02 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:32.043 10:08:02 -- target/multipath.sh@25 -- # sleep 1s 00:15:32.977 10:08:03 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:32.977 10:08:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:32.977 10:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:32.977 10:08:03 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:15:33.235 10:08:03 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:15:33.492 10:08:03 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:15:33.492 10:08:03 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:15:33.492 10:08:03 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.492 10:08:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:15:33.492 10:08:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:15:33.492 10:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:15:33.492 10:08:03 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:15:33.492 10:08:03 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:15:33.492 10:08:03 -- target/multipath.sh@22 -- # local timeout=20 00:15:33.492 10:08:03 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:15:33.492 10:08:03 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:15:33.492 10:08:03 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:33.492 10:08:03 -- target/multipath.sh@25 -- # sleep 1s 00:15:34.425 10:08:04 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:15:34.425 10:08:04 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:15:34.425 10:08:04 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:15:34.425 10:08:04 -- target/multipath.sh@132 -- # wait 85354 00:15:36.958 00:15:36.958 job0: (groupid=0, jobs=1): err= 0: pid=85375: Sun Jul 14 10:08:07 2024 00:15:36.958 read: IOPS=13.7k, BW=53.5MiB/s (56.1MB/s)(321MiB/6005msec) 00:15:36.958 slat (usec): min=2, max=6268, avg=37.22, stdev=182.95 00:15:36.958 clat (usec): min=643, max=14822, avg=6473.59, stdev=1428.29 00:15:36.958 lat (usec): min=660, max=14850, avg=6510.81, stdev=1440.82 00:15:36.958 clat percentiles (usec): 00:15:36.958 | 1.00th=[ 2835], 5.00th=[ 3982], 10.00th=[ 4555], 20.00th=[ 5407], 00:15:36.958 | 30.00th=[ 5932], 40.00th=[ 6259], 50.00th=[ 6521], 60.00th=[ 6849], 00:15:36.958 | 70.00th=[ 7177], 80.00th=[ 7570], 90.00th=[ 8029], 95.00th=[ 8586], 00:15:36.958 | 99.00th=[10290], 99.50th=[10814], 99.90th=[11731], 99.95th=[12256], 00:15:36.958 | 99.99th=[12911] 00:15:36.958 bw ( KiB/s): min=10944, max=49064, per=51.17%, avg=28034.18, stdev=12022.13, samples=11 00:15:36.958 iops : min= 2736, max=12266, avg=7008.55, stdev=3005.53, samples=11 00:15:36.958 write: IOPS=8219, BW=32.1MiB/s (33.7MB/s)(167MiB/5199msec); 0 zone resets 00:15:36.958 slat (usec): min=3, max=4908, avg=49.47, stdev=126.18 00:15:36.958 clat (usec): min=297, max=11771, avg=5371.13, stdev=1419.31 00:15:36.958 lat (usec): min=476, max=11796, avg=5420.60, stdev=1430.47 00:15:36.958 clat percentiles (usec): 00:15:36.958 | 1.00th=[ 2278], 5.00th=[ 2868], 10.00th=[ 3261], 20.00th=[ 3884], 00:15:36.958 | 30.00th=[ 4686], 40.00th=[ 5407], 50.00th=[ 5735], 60.00th=[ 5997], 00:15:36.958 | 70.00th=[ 6194], 80.00th=[ 6456], 90.00th=[ 6849], 95.00th=[ 7177], 00:15:36.958 | 99.00th=[ 8848], 99.50th=[ 9241], 99.90th=[10814], 99.95th=[10945], 00:15:36.958 | 99.99th=[11338] 00:15:36.958 bw ( KiB/s): min=11536, max=48008, per=85.33%, avg=28056.73, stdev=11582.25, samples=11 00:15:36.958 iops : min= 2884, max=12002, avg=7014.18, stdev=2895.56, samples=11 00:15:36.958 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.02% 00:15:36.958 lat (msec) : 2=0.25%, 4=10.52%, 10=88.14%, 20=1.06% 00:15:36.958 cpu : usr=6.28%, sys=25.70%, ctx=8538, majf=0, minf=108 00:15:36.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:15:36.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:36.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:36.958 issued rwts: total=82252,42733,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:36.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:36.958 00:15:36.958 Run status group 0 (all jobs): 00:15:36.958 READ: bw=53.5MiB/s (56.1MB/s), 53.5MiB/s-53.5MiB/s (56.1MB/s-56.1MB/s), io=321MiB (337MB), run=6005-6005msec 00:15:36.958 WRITE: bw=32.1MiB/s (33.7MB/s), 32.1MiB/s-32.1MiB/s (33.7MB/s-33.7MB/s), io=167MiB (175MB), run=5199-5199msec 00:15:36.958 00:15:36.958 Disk stats (read/write): 00:15:36.958 nvme0n1: ios=81256/41950, merge=0/0, ticks=484297/204128, in_queue=688425, util=98.62% 00:15:36.958 10:08:07 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:36.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:15:36.958 10:08:07 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:36.958 10:08:07 -- common/autotest_common.sh@1198 -- # local i=0 00:15:36.958 10:08:07 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:15:36.958 10:08:07 -- 
common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.958 10:08:07 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:15:36.958 10:08:07 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:36.958 10:08:07 -- common/autotest_common.sh@1210 -- # return 0 00:15:36.958 10:08:07 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:36.958 10:08:07 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:15:36.958 10:08:07 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:15:36.958 10:08:07 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:15:36.958 10:08:07 -- target/multipath.sh@144 -- # nvmftestfini 00:15:36.958 10:08:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:36.958 10:08:07 -- nvmf/common.sh@116 -- # sync 00:15:37.218 10:08:07 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:37.218 10:08:07 -- nvmf/common.sh@119 -- # set +e 00:15:37.218 10:08:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:37.218 10:08:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:37.218 rmmod nvme_tcp 00:15:37.218 rmmod nvme_fabrics 00:15:37.218 rmmod nvme_keyring 00:15:37.218 10:08:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:37.218 10:08:07 -- nvmf/common.sh@123 -- # set -e 00:15:37.218 10:08:07 -- nvmf/common.sh@124 -- # return 0 00:15:37.218 10:08:07 -- nvmf/common.sh@477 -- # '[' -n 85068 ']' 00:15:37.218 10:08:07 -- nvmf/common.sh@478 -- # killprocess 85068 00:15:37.218 10:08:07 -- common/autotest_common.sh@926 -- # '[' -z 85068 ']' 00:15:37.218 10:08:07 -- common/autotest_common.sh@930 -- # kill -0 85068 00:15:37.218 10:08:07 -- common/autotest_common.sh@931 -- # uname 00:15:37.218 10:08:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:37.218 10:08:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85068 00:15:37.218 10:08:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:37.218 10:08:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:37.218 killing process with pid 85068 00:15:37.218 10:08:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85068' 00:15:37.218 10:08:07 -- common/autotest_common.sh@945 -- # kill 85068 00:15:37.218 10:08:07 -- common/autotest_common.sh@950 -- # wait 85068 00:15:37.478 10:08:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:37.478 10:08:07 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:37.478 10:08:07 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:37.478 10:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.478 10:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.478 10:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.478 10:08:07 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:37.478 00:15:37.478 real 0m20.205s 00:15:37.478 user 1m18.586s 00:15:37.478 sys 0m7.205s 00:15:37.478 10:08:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:37.478 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 ************************************ 00:15:37.478 END TEST nvmf_multipath 00:15:37.478 ************************************ 00:15:37.478 10:08:07 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:37.478 10:08:07 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:37.478 10:08:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:37.478 10:08:07 -- common/autotest_common.sh@10 -- # set +x 00:15:37.478 ************************************ 00:15:37.478 START TEST nvmf_zcopy 00:15:37.478 ************************************ 00:15:37.478 10:08:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:15:37.478 * Looking for test storage... 00:15:37.478 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:15:37.478 10:08:07 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:37.478 10:08:07 -- nvmf/common.sh@7 -- # uname -s 00:15:37.478 10:08:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:37.478 10:08:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:37.478 10:08:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:37.478 10:08:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:37.478 10:08:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:37.478 10:08:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:37.478 10:08:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:37.478 10:08:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:37.478 10:08:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:37.478 10:08:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:37.478 10:08:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:15:37.478 10:08:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:37.478 10:08:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:37.478 10:08:07 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:37.478 10:08:07 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:37.478 10:08:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:37.478 10:08:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:37.478 10:08:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:37.478 10:08:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.478 10:08:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.478 
10:08:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.478 10:08:07 -- paths/export.sh@5 -- # export PATH 00:15:37.478 10:08:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:37.478 10:08:07 -- nvmf/common.sh@46 -- # : 0 00:15:37.478 10:08:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:37.478 10:08:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:37.478 10:08:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:37.478 10:08:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:37.478 10:08:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:37.478 10:08:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:37.478 10:08:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:37.478 10:08:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:37.478 10:08:07 -- target/zcopy.sh@12 -- # nvmftestinit 00:15:37.478 10:08:07 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:37.478 10:08:07 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:37.478 10:08:07 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:37.478 10:08:07 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:37.478 10:08:07 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:37.478 10:08:07 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:37.478 10:08:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:37.478 10:08:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:37.478 10:08:07 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:37.478 10:08:07 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:37.478 10:08:07 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:37.478 10:08:07 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:37.478 10:08:07 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:37.478 10:08:07 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:37.478 10:08:07 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:37.478 10:08:07 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:37.478 10:08:07 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:37.478 10:08:07 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:37.478 10:08:07 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:37.478 10:08:07 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:37.478 10:08:07 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:37.478 10:08:07 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:37.478 10:08:07 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:37.478 10:08:07 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:37.478 Cannot find device "nvmf_tgt_br" 00:15:37.478 10:08:08 -- nvmf/common.sh@154 -- # true 00:15:37.478 10:08:08 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:37.478 Cannot find device "nvmf_tgt_br2" 00:15:37.478 10:08:08 -- nvmf/common.sh@155 -- # true 00:15:37.478 10:08:08 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:37.737 10:08:08 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:37.737 Cannot find device "nvmf_tgt_br" 00:15:37.737 10:08:08 -- nvmf/common.sh@157 -- # true 00:15:37.737 10:08:08 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:37.737 Cannot find device "nvmf_tgt_br2" 00:15:37.737 10:08:08 -- nvmf/common.sh@158 -- # true 00:15:37.737 10:08:08 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:37.737 10:08:08 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:37.737 10:08:08 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:37.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.737 10:08:08 -- nvmf/common.sh@161 -- # true 00:15:37.737 10:08:08 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:37.737 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:37.737 10:08:08 -- nvmf/common.sh@162 -- # true 00:15:37.737 10:08:08 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:37.737 10:08:08 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:37.737 10:08:08 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:37.737 10:08:08 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:37.737 10:08:08 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:37.737 10:08:08 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:37.737 10:08:08 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:37.737 10:08:08 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:37.737 10:08:08 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:37.737 10:08:08 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:37.737 10:08:08 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:37.737 10:08:08 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:37.737 10:08:08 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:37.737 10:08:08 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:37.737 10:08:08 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:37.737 10:08:08 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:37.737 10:08:08 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:37.737 
10:08:08 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:37.737 10:08:08 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:37.737 10:08:08 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:37.737 10:08:08 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:37.995 10:08:08 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:37.995 10:08:08 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:37.995 10:08:08 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:37.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:37.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:15:37.995 00:15:37.995 --- 10.0.0.2 ping statistics --- 00:15:37.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.995 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:15:37.995 10:08:08 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:37.995 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:37.995 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:15:37.995 00:15:37.995 --- 10.0.0.3 ping statistics --- 00:15:37.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.995 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:15:37.995 10:08:08 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:37.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:37.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:15:37.995 00:15:37.995 --- 10.0.0.1 ping statistics --- 00:15:37.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:37.995 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:15:37.995 10:08:08 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:37.995 10:08:08 -- nvmf/common.sh@421 -- # return 0 00:15:37.995 10:08:08 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:37.995 10:08:08 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:37.995 10:08:08 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:37.995 10:08:08 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:37.995 10:08:08 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:37.995 10:08:08 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:37.995 10:08:08 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:37.995 10:08:08 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:15:37.995 10:08:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:37.995 10:08:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:37.995 10:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.995 10:08:08 -- nvmf/common.sh@469 -- # nvmfpid=85659 00:15:37.996 10:08:08 -- nvmf/common.sh@470 -- # waitforlisten 85659 00:15:37.996 10:08:08 -- common/autotest_common.sh@819 -- # '[' -z 85659 ']' 00:15:37.996 10:08:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:37.996 10:08:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:37.996 10:08:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:37.996 10:08:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:37.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
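The same veth topology is rebuilt here for the zcopy test, after which nvmf_tgt is started inside the target namespace with a single-core mask (-m 0x2) rather than the 0xF used for multipath. The start sequence reduces to roughly the following; waitforlisten polls the RPC socket, sketched here, as an assumption about its internals, with a plain retry loop:

    # launch the target inside the namespace and record its pid
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
    nvmfpid=$!
    # wait until the app answers on /var/tmp/spdk.sock before configuring it
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done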
00:15:37.996 10:08:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:37.996 10:08:08 -- common/autotest_common.sh@10 -- # set +x 00:15:37.996 [2024-07-14 10:08:08.392113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:15:37.996 [2024-07-14 10:08:08.392196] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:37.996 [2024-07-14 10:08:08.531658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:38.254 [2024-07-14 10:08:08.587165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:38.254 [2024-07-14 10:08:08.587333] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:38.254 [2024-07-14 10:08:08.587345] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:38.254 [2024-07-14 10:08:08.587354] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:38.254 [2024-07-14 10:08:08.587383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:38.819 10:08:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:38.819 10:08:09 -- common/autotest_common.sh@852 -- # return 0 00:15:38.819 10:08:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:38.819 10:08:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:38.819 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 10:08:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:39.077 10:08:09 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:15:39.077 10:08:09 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 [2024-07-14 10:08:09.410851] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.077 10:08:09 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.077 10:08:09 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 [2024-07-14 10:08:09.427028] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.077 10:08:09 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:39.077 10:08:09 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 
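With the target listening on /var/tmp/spdk.sock, the zero-copy configuration is applied over rpc_cmd. The relevant difference from the earlier tests is nvmf_create_transport -c 0 --zcopy, which sets the in-capsule data size to zero and enables zero-copy on the TCP transport; the rest is the usual subsystem plumbing. In sequence, taken from the calls above plus the nvmf_subsystem_add_ns that follows:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc nvmf_create_transport -t tcp -o -c 0 --zcopy
    $rpc nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_malloc_create 32 4096 -b malloc0
    $rpc nvmf_subsystem_add_ns "$nqn" malloc0 -n 1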
00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x
00:15:39.077 malloc0
00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:39.077 10:08:09 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:15:39.077 10:08:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:39.077 10:08:09 -- common/autotest_common.sh@10 -- # set +x
00:15:39.077 10:08:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:39.077 10:08:09 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:15:39.077 10:08:09 -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:15:39.077 10:08:09 -- nvmf/common.sh@520 -- # config=()
00:15:39.077 10:08:09 -- nvmf/common.sh@520 -- # local subsystem config
00:15:39.077 10:08:09 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:39.077 10:08:09 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:39.077 {
00:15:39.077 "params": {
00:15:39.077 "name": "Nvme$subsystem",
00:15:39.077 "trtype": "$TEST_TRANSPORT",
00:15:39.077 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:39.077 "adrfam": "ipv4",
00:15:39.077 "trsvcid": "$NVMF_PORT",
00:15:39.077 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:39.077 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:39.077 "hdgst": ${hdgst:-false},
00:15:39.077 "ddgst": ${ddgst:-false}
00:15:39.077 },
00:15:39.077 "method": "bdev_nvme_attach_controller"
00:15:39.077 }
00:15:39.077 EOF
00:15:39.077 )")
00:15:39.077 10:08:09 -- nvmf/common.sh@542 -- # cat
00:15:39.077 10:08:09 -- nvmf/common.sh@544 -- # jq .
00:15:39.077 10:08:09 -- nvmf/common.sh@545 -- # IFS=,
00:15:39.077 10:08:09 -- nvmf/common.sh@546 -- # printf '%s\n' '{
00:15:39.077 "params": {
00:15:39.077 "name": "Nvme1",
00:15:39.077 "trtype": "tcp",
00:15:39.077 "traddr": "10.0.0.2",
00:15:39.077 "adrfam": "ipv4",
00:15:39.077 "trsvcid": "4420",
00:15:39.077 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:15:39.077 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:15:39.077 "hdgst": false,
00:15:39.077 "ddgst": false
00:15:39.077 },
00:15:39.077 "method": "bdev_nvme_attach_controller"
00:15:39.077 }'
00:15:39.078 [2024-07-14 10:08:09.519953] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:15:39.078 [2024-07-14 10:08:09.520048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85710 ]
00:15:39.335 [2024-07-14 10:08:09.661623] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:15:39.335 [2024-07-14 10:08:09.724709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:15:39.593 Running I/O for 10 seconds...
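[editor's note] gen_nvmf_target_json, traced above, expands the heredoc once per subsystem argument (here just "1"), substitutes the live transport/address values, and hands bdevperf the result on an inherited file descriptor. A standalone sketch of the equivalent invocation; the outer "subsystems"/"config" wrapper is an assumption (the trace only prints the inner attach-controller entry), the heredoc here stands in for the script's process substitution, and the parameters are exactly the ones printed above:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 62<<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

The -t/-q/-w/-o flags (runtime, queue depth, workload, I/O size) are taken verbatim from the trace; this 10-second verify pass produces the latency table that follows.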
00:15:49.570
00:15:49.570 Latency(us)
00:15:49.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:49.570 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:15:49.570 Verification LBA range: start 0x0 length 0x1000
00:15:49.570 Nvme1n1 : 10.01 11242.93 87.84 0.00 0.00 11356.53 1273.48 19422.49
00:15:49.570 ===================================================================================================================
00:15:49.570 Total : 11242.93 87.84 0.00 0.00 11356.53 1273.48 19422.49
00:15:49.570 10:08:20 -- target/zcopy.sh@39 -- # perfpid=85822
00:15:49.570 10:08:20 -- target/zcopy.sh@41 -- # xtrace_disable
00:15:49.570 10:08:20 -- common/autotest_common.sh@10 -- # set +x
00:15:49.570 10:08:20 -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:15:49.570 10:08:20 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:15:49.570 10:08:20 -- nvmf/common.sh@520 -- # config=()
00:15:49.570 10:08:20 -- nvmf/common.sh@520 -- # local subsystem config
00:15:49.570 10:08:20 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}"
00:15:49.570 10:08:20 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF
00:15:49.570 {
00:15:49.570 "params": {
00:15:49.570 "name": "Nvme$subsystem",
00:15:49.570 "trtype": "$TEST_TRANSPORT",
00:15:49.570 "traddr": "$NVMF_FIRST_TARGET_IP",
00:15:49.570 "adrfam": "ipv4",
00:15:49.570 "trsvcid": "$NVMF_PORT",
00:15:49.570 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:15:49.570 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:15:49.570 "hdgst": ${hdgst:-false},
00:15:49.570 "ddgst": ${ddgst:-false}
00:15:49.570 },
00:15:49.570 "method": "bdev_nvme_attach_controller"
00:15:49.570 }
00:15:49.570 EOF
00:15:49.570 )")
00:15:49.570 10:08:20 -- nvmf/common.sh@542 -- # cat
00:15:49.570 [2024-07-14 10:08:20.106428] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:49.570 [2024-07-14 10:08:20.106468] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:49.570 10:08:20 -- nvmf/common.sh@544 -- # jq .
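[editor's note] Everything below this point is one pattern repeating. The second bdevperf instance (perfpid=85822, 5 seconds of randrw at a 50% read mix per -M 50) runs I/O while the script hammers nvmf_subsystem_add_ns for the same namespace; NSID 1 was claimed during setup, so every call must fail, and each attempt logs the two *ERROR* lines plus a JSON-RPC error response with code -32602 ("invalid params" in the JSON-RPC 2.0 spec). A hedged reconstruction of that driver loop, with $rpc and $perfpid as in the sketches above (the exact form lives in target/zcopy.sh):

while kill -0 "$perfpid" 2> /dev/null; do
    # must fail every time: NSID 1 is already attached to cnode1
    if $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1; then
        echo 'nvmf_subsystem_add_ns unexpectedly succeeded during I/O' >&2
        exit 1
    fi
done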
00:15:49.570 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.570 10:08:20 -- nvmf/common.sh@545 -- # IFS=, 00:15:49.570 10:08:20 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:15:49.570 "params": { 00:15:49.570 "name": "Nvme1", 00:15:49.570 "trtype": "tcp", 00:15:49.570 "traddr": "10.0.0.2", 00:15:49.570 "adrfam": "ipv4", 00:15:49.570 "trsvcid": "4420", 00:15:49.570 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:49.570 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:49.570 "hdgst": false, 00:15:49.570 "ddgst": false 00:15:49.570 }, 00:15:49.570 "method": "bdev_nvme_attach_controller" 00:15:49.570 }' 00:15:49.828 [2024-07-14 10:08:20.118386] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.828 [2024-07-14 10:08:20.118418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.828 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.828 [2024-07-14 10:08:20.126381] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.828 [2024-07-14 10:08:20.126407] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.138409] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.138438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.150400] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.150424] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.158146] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:15:49.829 [2024-07-14 10:08:20.158248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85822 ] 00:15:49.829 [2024-07-14 10:08:20.162410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.162435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.174392] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.174417] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.186415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.186441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.198415] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.198439] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.210417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.210441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.222418] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.222444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.234421] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.234445] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.246425] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.246450] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.258429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.258453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.270432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.270456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.282435] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.282459] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.294436] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.294464] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.298882] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.829 [2024-07-14 10:08:20.306445] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.306471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.318444] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.318469] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.330453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.330482] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.342475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.342505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.354451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.354475] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:49.829 [2024-07-14 10:08:20.359341] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.829 [2024-07-14 10:08:20.366456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:49.829 [2024-07-14 10:08:20.366480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:49.829 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.378467] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.378496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.390473] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.390501] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error 
received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.402491] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.402519] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.414471] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.414498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.426469] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.426502] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.438476] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.438504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.450483] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.450508] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.462468] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.462492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.474525] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.474555] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.486517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.486543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.498529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.498556] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.088 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.088 [2024-07-14 10:08:20.510553] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.088 [2024-07-14 10:08:20.510580] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.522549] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.522574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 Running I/O for 5 seconds... 
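[editor's note] The run launched just above ("Running I/O for 5 seconds...") keeps this up for its full duration, one failed add_ns per loop iteration; once bdevperf exits, kill -0 stops succeeding and the loop falls through. The teardown is presumably just reaping the process; the shape of this step is an assumption:

wait "$perfpid"   # propagates bdevperf's exit status to the test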
00:15:50.089 [2024-07-14 10:08:20.534559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.534582] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.551531] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.551564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.568087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.568120] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.584981] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.585015] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.601463] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.601496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.089 [2024-07-14 10:08:20.617502] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.089 [2024-07-14 10:08:20.617535] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.089 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.347 [2024-07-14 10:08:20.633980] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.347 [2024-07-14 10:08:20.634013] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.347 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:50.347 [2024-07-14 10:08:20.651447] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.347 [2024-07-14 10:08:20.651480] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.347 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.667082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.667115] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.683705] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.683737] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.699682] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.699714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.710806] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.710840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.726325] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.726358] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.742615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.742675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.759384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.759416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.775800] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.775833] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.793158] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.793192] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.808427] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.808462] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.819827] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.819860] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.836241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.836273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.852975] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.853009] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.869539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.869573] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.348 [2024-07-14 10:08:20.886132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.348 [2024-07-14 10:08:20.886165] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.348 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.902938] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.903003] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.919283] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.919316] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.936058] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.936090] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.951986] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.952020] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.969114] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.969147] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: 
error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:20.985298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:20.985329] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:20 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:21.001750] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.606 [2024-07-14 10:08:21.001783] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.606 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.606 [2024-07-14 10:08:21.018025] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.018057] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.029672] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.029703] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.045225] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.045256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.062299] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.062332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.077686] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.077719] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.088517] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.088550] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.104850] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.104883] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.120221] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.120253] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.132033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.132065] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.607 [2024-07-14 10:08:21.147643] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.607 [2024-07-14 10:08:21.147675] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.607 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.163599] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.163632] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.180669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.180701] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.196734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.196767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.213505] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.213539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.230268] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.230332] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.247867] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.247900] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.264180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.264240] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.281242] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.281274] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.297862] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.297895] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: 
nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.314769] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.314803] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.330524] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.330558] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.348128] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.348161] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.363537] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.363571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.378391] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.865 [2024-07-14 10:08:21.378421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.865 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.865 [2024-07-14 10:08:21.393544] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.866 [2024-07-14 10:08:21.393578] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:50.866 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:50.866 [2024-07-14 10:08:21.410937] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:50.866 [2024-07-14 10:08:21.410987] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:51.124 2024/07/14 10:08:21 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:51.124 [2024-07-14 10:08:21.428459] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:51.124 [2024-07-14 10:08:21.428494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:51.124 2024/07/14 10:08:21 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
[the three-line sequence above repeats at roughly 11-17 ms intervals, with only the timestamps varying, from 2024-07-14 10:08:21.428 through 10:08:23.769 (elapsed 00:15:51.124 to 00:15:53.481)]
00:15:53.481 [2024-07-14 10:08:23.769254] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:53.481 [2024-07-14 10:08:23.769287] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:53.481 2024/07/14
10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.783921] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.783954] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.798730] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.798766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.809993] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.810026] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.826082] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.826116] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.842406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.842440] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.858990] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.859025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.875649] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.875682] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.892135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.892169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.908954] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.908988] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.925453] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.925486] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.942346] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.942379] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.958456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.958491] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.975811] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.975845] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:23.992048] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:23.992082] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:15:53.481 2024/07/14 10:08:23 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:24.008248] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:24.008280] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.481 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.481 [2024-07-14 10:08:24.024885] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.481 [2024-07-14 10:08:24.024919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.042009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.042042] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.058619] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.058681] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.075559] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.075593] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.092150] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.092184] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.108623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.108656] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.125637] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.125670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.142252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.142284] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.158375] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.158408] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.175944] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.175978] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.191266] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.191297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.202708] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.202742] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.218579] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.218614] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.234863] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.234914] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.251433] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.251467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.267562] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.267597] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.740 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.740 [2024-07-14 10:08:24.284922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.740 [2024-07-14 10:08:24.284957] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.998 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.301871] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.301903] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.317319] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.317353] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.329063] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 
10:08:24.329099] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.345343] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.345376] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.361621] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.361654] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.378897] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.378932] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.395688] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.395721] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.412763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.412797] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.428662] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.428696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.445601] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:15:53.999 [2024-07-14 10:08:24.445634] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.461193] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.461273] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.472456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.472489] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.487873] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.487907] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.504313] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.504341] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.520571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.520603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:53.999 [2024-07-14 10:08:24.536818] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:53.999 [2024-07-14 10:08:24.536850] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:53.999 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.554064] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:15:54.258 [2024-07-14 10:08:24.554098] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.570539] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.570571] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.587472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.587507] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.603795] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.603828] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.619973] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.620006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.637027] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.637060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.653168] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.653247] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.670135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.670168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.686988] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.687021] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.703317] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.703349] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.720278] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.720311] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.735946] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.735980] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.746917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.746966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.763053] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.763086] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.778661] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.778696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.258 [2024-07-14 10:08:24.789725] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.258 [2024-07-14 10:08:24.789757] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.258 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.805337] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.805370] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.821917] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.821950] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.838521] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.838554] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.855180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.855244] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.872134] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.872167] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 
10:08:24.889213] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.889256] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.905555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.905588] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.922479] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.922512] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.939145] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.939179] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.955611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.955645] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.966472] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.966504] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:24.982272] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.982304] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:24 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
00:15:54.517 [2024-07-14 10:08:24.999207] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:24.999286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:25.015275] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:25.015307] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:25.031658] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:25.031691] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.517 [2024-07-14 10:08:25.048702] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.517 [2024-07-14 10:08:25.048735] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.517 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.065135] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.065168] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.081701] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.081734] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.098732] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.098767] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 
Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.114942] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.114991] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.131589] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.131622] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.776 [2024-07-14 10:08:25.147894] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.776 [2024-07-14 10:08:25.147927] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.776 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.777 [2024-07-14 10:08:25.163932] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.777 [2024-07-14 10:08:25.163965] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.777 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.777 [2024-07-14 10:08:25.175530] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.777 [2024-07-14 10:08:25.175562] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.777 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.777 [2024-07-14 10:08:25.190352] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.777 [2024-07-14 10:08:25.190385] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.777 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:15:54.777 [2024-07-14 10:08:25.201406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:15:54.777 [2024-07-14 10:08:25.201438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:15:54.777 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, 
err: Code=-32602 Msg=Invalid parameters
00:15:54.777 [2024-07-14 10:08:25.216721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:15:54.777 [2024-07-14 10:08:25.216755] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:15:54.777 2024/07/14 10:08:25 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:15:55.036
00:15:55.036 Latency(us)
00:15:55.036 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:15:55.036 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:15:55.036 Nvme1n1 : 5.01 13625.61 106.45 0.00 0.00 9383.09 4140.68 21924.77
00:15:55.036 ===================================================================================================================
00:15:55.036 Total : 13625.61 106.45 0.00 0.00 9383.09 4140.68 21924.77
00:15:55.295 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (85822) - No such process
00:15:55.295 10:08:25 -- target/zcopy.sh@49 -- # wait 85822
00:15:55.295 10:08:25 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:15:55.295 10:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:55.295 10:08:25 -- common/autotest_common.sh@10 -- # set +x
00:15:55.295 10:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:15:55.295 10:08:25 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:15:55.295 10:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable
00:15:55.295 10:08:25 -- common/autotest_common.sh@10 -- # set +x
00:15:55.295 delay0
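The burst of Code=-32602 responses above is the expected negative path of the zcopy test: while the abort job ran, nvmf_subsystem_add_ns was retried again and again with an NSID that was still attached, and the target rejected every attempt. A minimal sketch of the conflict and its resolution, assuming a running target and the rpc.py client from the SPDK repo (bdev, subsystem, and NSID values taken from the log):

  # attach malloc0 as NSID 1 of cnode1 (names as in the log)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # a second add with the same NSID fails with JSON-RPC error -32602,
  # "Requested NSID 1 already in use", exactly as logged above
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  # freeing the NSID first lets the add succeed again
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1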
00:15:55.295 10:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.295 10:08:25 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:15:55.295 10:08:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:15:55.295 10:08:25 -- common/autotest_common.sh@10 -- # set +x 00:15:55.295 10:08:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:15:55.295 10:08:25 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:15:55.553 [2024-07-14 10:08:25.946219] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:02.111 Initializing NVMe Controllers 00:16:02.111 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:02.111 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:02.111 Initialization complete. Launching workers. 00:16:02.111 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 66 00:16:02.111 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 353, failed to submit 33 00:16:02.111 success 177, unsuccess 176, failed 0 00:16:02.111 10:08:31 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:16:02.111 10:08:31 -- target/zcopy.sh@60 -- # nvmftestfini 00:16:02.111 10:08:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:02.111 10:08:31 -- nvmf/common.sh@116 -- # sync 00:16:02.111 10:08:32 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@119 -- # set +e 00:16:02.111 10:08:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:02.111 10:08:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:02.111 rmmod nvme_tcp 00:16:02.111 rmmod nvme_fabrics 00:16:02.111 rmmod nvme_keyring 00:16:02.111 10:08:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:02.111 10:08:32 -- nvmf/common.sh@123 -- # set -e 00:16:02.111 10:08:32 -- nvmf/common.sh@124 -- # return 0 00:16:02.111 10:08:32 -- nvmf/common.sh@477 -- # '[' -n 85659 ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@478 -- # killprocess 85659 00:16:02.111 10:08:32 -- common/autotest_common.sh@926 -- # '[' -z 85659 ']' 00:16:02.111 10:08:32 -- common/autotest_common.sh@930 -- # kill -0 85659 00:16:02.111 10:08:32 -- common/autotest_common.sh@931 -- # uname 00:16:02.111 10:08:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:02.111 10:08:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 85659 00:16:02.111 killing process with pid 85659 00:16:02.111 10:08:32 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:02.111 10:08:32 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:02.111 10:08:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 85659' 00:16:02.111 10:08:32 -- common/autotest_common.sh@945 -- # kill 85659 00:16:02.111 10:08:32 -- common/autotest_common.sh@950 -- # wait 85659 00:16:02.111 10:08:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:02.111 10:08:32 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:02.111 10:08:32 -- nvmf/common.sh@616 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.111 10:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.111 10:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.111 10:08:32 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:02.111 00:16:02.111 real 0m24.504s 00:16:02.111 user 0m39.798s 00:16:02.111 sys 0m6.477s 00:16:02.111 10:08:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.111 ************************************ 00:16:02.111 END TEST nvmf_zcopy 00:16:02.111 ************************************ 00:16:02.111 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:16:02.111 10:08:32 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:02.111 10:08:32 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:02.111 10:08:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.111 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:16:02.111 ************************************ 00:16:02.111 START TEST nvmf_nmic 00:16:02.111 ************************************ 00:16:02.111 10:08:32 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:16:02.111 * Looking for test storage... 00:16:02.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:02.111 10:08:32 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:02.111 10:08:32 -- nvmf/common.sh@7 -- # uname -s 00:16:02.111 10:08:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.111 10:08:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.111 10:08:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.111 10:08:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.111 10:08:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.111 10:08:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.111 10:08:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.111 10:08:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.111 10:08:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.111 10:08:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:02.111 10:08:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:02.111 10:08:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.111 10:08:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.111 10:08:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:02.111 10:08:32 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:02.111 10:08:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.111 10:08:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.111 10:08:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.111 10:08:32 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.111 10:08:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.111 10:08:32 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.111 10:08:32 -- paths/export.sh@5 -- # export PATH 00:16:02.111 10:08:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.111 10:08:32 -- nvmf/common.sh@46 -- # : 0 00:16:02.111 10:08:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:02.111 10:08:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:02.111 10:08:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.111 10:08:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.111 10:08:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:02.111 10:08:32 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:02.111 10:08:32 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:02.111 10:08:32 -- target/nmic.sh@14 -- # nvmftestinit 00:16:02.111 10:08:32 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:02.111 10:08:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.111 10:08:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:02.111 10:08:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:02.111 10:08:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:02.111 10:08:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:16:02.111 10:08:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.111 10:08:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.111 10:08:32 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:02.111 10:08:32 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:02.112 10:08:32 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:02.112 10:08:32 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:02.112 10:08:32 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:02.112 10:08:32 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:02.112 10:08:32 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:02.112 10:08:32 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:02.112 10:08:32 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:02.112 10:08:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:02.112 10:08:32 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:02.112 10:08:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:02.112 10:08:32 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:02.112 10:08:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:02.112 10:08:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:02.112 10:08:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:02.112 10:08:32 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:02.112 10:08:32 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:02.112 Cannot find device "nvmf_tgt_br" 00:16:02.112 10:08:32 -- nvmf/common.sh@154 -- # true 00:16:02.112 10:08:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:02.112 Cannot find device "nvmf_tgt_br2" 00:16:02.112 10:08:32 -- nvmf/common.sh@155 -- # true 00:16:02.112 10:08:32 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:02.112 10:08:32 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:02.112 Cannot find device "nvmf_tgt_br" 00:16:02.112 10:08:32 -- nvmf/common.sh@157 -- # true 00:16:02.112 10:08:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:02.112 Cannot find device "nvmf_tgt_br2" 00:16:02.112 10:08:32 -- nvmf/common.sh@158 -- # true 00:16:02.112 10:08:32 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:02.370 10:08:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:02.370 10:08:32 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:02.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.370 10:08:32 -- nvmf/common.sh@161 -- # true 00:16:02.370 10:08:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:02.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:02.370 10:08:32 -- nvmf/common.sh@162 -- # true 00:16:02.370 10:08:32 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:02.370 10:08:32 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:02.370 10:08:32 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:02.370 10:08:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:02.370 
10:08:32 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:02.370 10:08:32 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:02.370 10:08:32 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:02.370 10:08:32 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:02.370 10:08:32 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:02.370 10:08:32 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:02.370 10:08:32 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:02.370 10:08:32 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:02.370 10:08:32 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:02.370 10:08:32 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:02.370 10:08:32 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:02.370 10:08:32 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:02.370 10:08:32 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:02.370 10:08:32 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:02.370 10:08:32 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:02.370 10:08:32 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:02.370 10:08:32 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:02.370 10:08:32 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:02.370 10:08:32 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:02.370 10:08:32 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:02.370 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:02.370 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:16:02.370 00:16:02.370 --- 10.0.0.2 ping statistics --- 00:16:02.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.370 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:16:02.370 10:08:32 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:02.370 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:02.370 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:16:02.370 00:16:02.370 --- 10.0.0.3 ping statistics --- 00:16:02.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.370 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:16:02.370 10:08:32 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:02.370 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:02.370 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:16:02.370 00:16:02.370 --- 10.0.0.1 ping statistics --- 00:16:02.370 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:02.370 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:16:02.370 10:08:32 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:02.371 10:08:32 -- nvmf/common.sh@421 -- # return 0 00:16:02.371 10:08:32 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:02.371 10:08:32 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:02.371 10:08:32 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:02.371 10:08:32 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:02.371 10:08:32 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:02.371 10:08:32 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:02.371 10:08:32 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:02.371 10:08:32 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:16:02.371 10:08:32 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:02.371 10:08:32 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:02.371 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:16:02.371 10:08:32 -- nvmf/common.sh@469 -- # nvmfpid=86141 00:16:02.371 10:08:32 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:02.371 10:08:32 -- nvmf/common.sh@470 -- # waitforlisten 86141 00:16:02.371 10:08:32 -- common/autotest_common.sh@819 -- # '[' -z 86141 ']' 00:16:02.371 10:08:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.371 10:08:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.371 10:08:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.371 10:08:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.371 10:08:32 -- common/autotest_common.sh@10 -- # set +x 00:16:02.628 [2024-07-14 10:08:32.957193] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:02.629 [2024-07-14 10:08:32.957316] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:02.629 [2024-07-14 10:08:33.096057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:02.629 [2024-07-14 10:08:33.173700] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:02.629 [2024-07-14 10:08:33.174139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:02.629 [2024-07-14 10:08:33.174298] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:02.629 [2024-07-14 10:08:33.174419] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
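The nvmf_veth_init block above builds the virtual test network: the target listens inside the nvmf_tgt_ns_spdk namespace, and the initiator reaches 10.0.0.2:4420 through veth pairs joined by a bridge. A condensed sketch of the same topology, using the interface names and addresses from the log (the second target interface, the link-up commands, and error handling are omitted for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br                     # bridge the two pairs together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                          # initiator -> target, as verified above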
00:16:02.629 [2024-07-14 10:08:33.174857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.629 [2024-07-14 10:08:33.174988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.896 [2024-07-14 10:08:33.175559] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:02.896 [2024-07-14 10:08:33.175629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:03.462 10:08:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:03.462 10:08:33 -- common/autotest_common.sh@852 -- # return 0 00:16:03.462 10:08:33 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:03.462 10:08:33 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:03.462 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.462 10:08:33 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:03.462 10:08:33 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:03.462 10:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.462 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.462 [2024-07-14 10:08:33.954722] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:03.462 10:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.462 10:08:33 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:03.462 10:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.462 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.462 Malloc0 00:16:03.462 10:08:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.462 10:08:33 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:03.462 10:08:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.462 10:08:33 -- common/autotest_common.sh@10 -- # set +x 00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.720 10:08:34 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.720 10:08:34 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:03.720 [2024-07-14 10:08:34.022486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.720 test case1: single bdev can't be used in multiple subsystems 00:16:03.720 10:08:34 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:16:03.720 10:08:34 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x 00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:03.720 10:08:34 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable 
00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x
00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:03.720 10:08:34 -- target/nmic.sh@28 -- # nmic_status=0
00:16:03.720 10:08:34 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x
00:16:03.720 [2024-07-14 10:08:34.046328] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:16:03.720 [2024-07-14 10:08:34.046474] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:16:03.720 [2024-07-14 10:08:34.046608] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:03.720 2024/07/14 10:08:34 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:16:03.720 request:
00:16:03.720 {
00:16:03.720 "method": "nvmf_subsystem_add_ns",
00:16:03.720 "params": {
00:16:03.720 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:16:03.720 "namespace": {
00:16:03.720 "bdev_name": "Malloc0"
00:16:03.720 }
00:16:03.720 }
00:16:03.720 }
00:16:03.720 Got JSON-RPC error response
00:16:03.720 GoRPCClient: error on JSON-RPC call
00:16:03.720 Adding namespace failed - expected result.
test case2: host connect to nvmf target in multiple paths
00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]]
00:16:03.720 10:08:34 -- target/nmic.sh@29 -- # nmic_status=1
00:16:03.720 10:08:34 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:16:03.720 10:08:34 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
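Test case1 fails by design: the first nvmf_subsystem_add_ns gave the NVMe-oF target an exclusive_write claim on Malloc0, so bdev_open refuses to open the same bdev for cnode2 and the RPC comes back with the -32602 response shown above. A sketch of the pattern with rpc.py, names as in the log (Malloc1 is a hypothetical second bdev); exporting storage through two subsystems needs a separate bdev per subsystem:

  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # claims Malloc0 (exclusive_write)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: bdev already claimed
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1                       # hypothetical second bdev
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1   # accepted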
00:16:03.720 10:08:34 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:16:03.720 10:08:34 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:16:03.720 10:08:34 -- common/autotest_common.sh@551 -- # xtrace_disable
00:16:03.720 10:08:34 -- common/autotest_common.sh@10 -- # set +x
00:16:03.720 [2024-07-14 10:08:34.058439] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:16:03.720 10:08:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:16:03.720 10:08:34 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:16:03.720 10:08:34 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:16:03.978 10:08:34 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:16:03.978 10:08:34 -- common/autotest_common.sh@1177 -- # local i=0
00:16:03.978 10:08:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0
00:16:03.978 10:08:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]]
00:16:03.978 10:08:34 -- common/autotest_common.sh@1184 -- # sleep 2
00:16:05.880 10:08:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 ))
00:16:05.880 10:08:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL
00:16:05.880 10:08:36 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME
00:16:05.880 10:08:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1
00:16:05.880 10:08:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter ))
00:16:05.880 10:08:36 -- common/autotest_common.sh@1187 -- # return 0
00:16:05.880 10:08:36 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:16:06.138 [global]
00:16:06.138 thread=1
00:16:06.138 invalidate=1
00:16:06.138 rw=write
00:16:06.138 time_based=1
00:16:06.138 runtime=1
00:16:06.138 ioengine=libaio
00:16:06.138 direct=1
00:16:06.138 bs=4096
00:16:06.138 iodepth=1
00:16:06.138 norandommap=0
00:16:06.138 numjobs=1
00:16:06.138
00:16:06.138 verify_dump=1
00:16:06.138 verify_backlog=512
00:16:06.138 verify_state_save=0
00:16:06.138 do_verify=1
00:16:06.138 verify=crc32c-intel
00:16:06.138 [job0]
00:16:06.138 filename=/dev/nvme0n1
00:16:06.138 Could not set queue depth (nvme0n1)
00:16:06.138 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:16:06.139 fio-3.35
00:16:06.139 Starting 1 thread
00:16:07.513
00:16:07.513 job0: (groupid=0, jobs=1): err= 0: pid=86245: Sun Jul 14 10:08:37 2024
00:16:07.513 read: IOPS=3294, BW=12.9MiB/s (13.5MB/s)(12.9MiB/1001msec)
00:16:07.513 slat (nsec): min=12030, max=89481, avg=16515.44, stdev=5939.78
00:16:07.513 clat (usec): min=109, max=2819, avg=142.64, stdev=76.02
00:16:07.513 lat (usec): min=127, max=2842, avg=159.16, stdev=76.57
00:16:07.513 clat percentiles (usec):
00:16:07.513 | 1.00th=[ 120], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128],
00:16:07.513 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141],
00:16:07.513 | 70.00th=[ 145], 80.00th=[ 153], 90.00th=[ 163], 95.00th=[ 172],
00:16:07.513 | 99.00th=[ 186], 99.50th=[ 192], 99.90th=[ 1549], 99.95th=[ 2278],
00:16:07.513 | 99.99th=[ 2835]
00:16:07.513 write: IOPS=3580, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1001msec); 0 zone resets
00:16:07.513 slat (usec): min=18, max=186, avg=24.38, stdev= 8.42
00:16:07.513 clat (usec): min=81, max=7142, avg=104.30, stdev=122.98
00:16:07.513 lat (usec): min=101, max=7169, avg=128.68, stdev=123.46
00:16:07.513 clat percentiles (usec):
00:16:07.513 | 1.00th=[ 85], 5.00th=[ 88], 10.00th=[ 89], 20.00th=[ 92],
00:16:07.513 | 30.00th=[ 93], 40.00th=[ 95], 50.00th=[ 97], 60.00th=[ 100],
00:16:07.513 | 70.00th=[ 104], 80.00th=[ 111], 90.00th=[ 121], 95.00th=[ 130],
00:16:07.513 | 99.00th=[ 147], 99.50th=[ 155], 99.90th=[ 709], 99.95th=[ 1549],
00:16:07.513 | 99.99th=[ 7111]
00:16:07.513 bw ( KiB/s): min=13912, max=13912, per=97.14%, avg=13912.00, stdev= 0.00, samples=1
00:16:07.513 iops : min= 3478, max= 3478, avg=3478.00, stdev= 0.00, samples=1
00:16:07.513 lat (usec) : 100=32.24%, 250=67.60%, 500=0.03%, 750=0.03%
00:16:07.513 lat (msec) : 2=0.04%, 4=0.04%, 10=0.01%
00:16:07.513 cpu : usr=2.30%, sys=11.00%, ctx=6884, majf=0, minf=2
00:16:07.513 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:16:07.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:07.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:16:07.513 issued rwts: total=3298,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:16:07.513 latency : target=0, window=0, percentile=100.00%, depth=1
00:16:07.513
00:16:07.513 Run status group 0 (all jobs):
00:16:07.513 READ: bw=12.9MiB/s (13.5MB/s), 12.9MiB/s-12.9MiB/s (13.5MB/s-13.5MB/s), io=12.9MiB (13.5MB), run=1001-1001msec
00:16:07.513 WRITE: bw=14.0MiB/s (14.7MB/s), 14.0MiB/s-14.0MiB/s (14.7MB/s-14.7MB/s), io=14.0MiB (14.7MB), run=1001-1001msec
00:16:07.513
00:16:07.513 Disk stats (read/write):
00:16:07.513 nvme0n1: ios=3099/3072, merge=0/0, ticks=488/381, in_queue=869, util=90.38%
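Case2 then attaches the same host to the subsystem over both listeners (ports 4420 and 4421), which is why the disconnect that follows reports two controllers for a single NQN. The same flow in plain nvme-cli, with the host NQN/ID values printed earlier in this log:

  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31
  HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect --hostnqn=$HOSTNQN --hostid=$HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME   # count devices by serial; the test expects 1
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1            # tears down both controllers at once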
'[' -n 86141 ']' 00:16:07.513 10:08:37 -- nvmf/common.sh@478 -- # killprocess 86141 00:16:07.513 10:08:37 -- common/autotest_common.sh@926 -- # '[' -z 86141 ']' 00:16:07.513 10:08:37 -- common/autotest_common.sh@930 -- # kill -0 86141 00:16:07.513 10:08:37 -- common/autotest_common.sh@931 -- # uname 00:16:07.513 10:08:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:07.513 10:08:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86141 00:16:07.513 killing process with pid 86141 00:16:07.513 10:08:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:07.513 10:08:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:07.513 10:08:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86141' 00:16:07.513 10:08:37 -- common/autotest_common.sh@945 -- # kill 86141 00:16:07.513 10:08:37 -- common/autotest_common.sh@950 -- # wait 86141 00:16:07.771 10:08:38 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:07.771 10:08:38 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:07.771 10:08:38 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:07.771 10:08:38 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:07.771 10:08:38 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:07.771 10:08:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.771 10:08:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.771 10:08:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.771 10:08:38 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:07.771 ************************************ 00:16:07.771 END TEST nvmf_nmic 00:16:07.771 ************************************ 00:16:07.771 00:16:07.771 real 0m5.716s 00:16:07.771 user 0m19.291s 00:16:07.771 sys 0m1.405s 00:16:07.771 10:08:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:07.771 10:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:07.771 10:08:38 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:07.772 10:08:38 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:07.772 10:08:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:07.772 10:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:07.772 ************************************ 00:16:07.772 START TEST nvmf_fio_target 00:16:07.772 ************************************ 00:16:07.772 10:08:38 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:16:07.772 * Looking for test storage... 
00:16:07.772 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:07.772 10:08:38 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:07.772 10:08:38 -- nvmf/common.sh@7 -- # uname -s 00:16:07.772 10:08:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:07.772 10:08:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:07.772 10:08:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:07.772 10:08:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:07.772 10:08:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:07.772 10:08:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:07.772 10:08:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:07.772 10:08:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:07.772 10:08:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:07.772 10:08:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:07.772 10:08:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:07.772 10:08:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:07.772 10:08:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:07.772 10:08:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:07.772 10:08:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:07.772 10:08:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:07.772 10:08:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:07.772 10:08:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:07.772 10:08:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.772 10:08:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.772 10:08:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.772 10:08:38 -- paths/export.sh@5 
-- # export PATH 00:16:07.772 10:08:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:07.772 10:08:38 -- nvmf/common.sh@46 -- # : 0 00:16:07.772 10:08:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:07.772 10:08:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:07.772 10:08:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:07.772 10:08:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:07.772 10:08:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:07.772 10:08:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:07.772 10:08:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:07.772 10:08:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:07.772 10:08:38 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:07.772 10:08:38 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:07.772 10:08:38 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:07.772 10:08:38 -- target/fio.sh@16 -- # nvmftestinit 00:16:07.772 10:08:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:07.772 10:08:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:07.772 10:08:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:07.772 10:08:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:07.772 10:08:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:07.772 10:08:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:07.772 10:08:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:07.772 10:08:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:07.772 10:08:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:07.772 10:08:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:08.030 10:08:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.030 10:08:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.030 10:08:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:08.030 10:08:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:08.030 10:08:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:08.030 10:08:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:08.030 10:08:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:08.030 10:08:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.030 10:08:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:08.030 10:08:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:08.030 10:08:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:08.030 10:08:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:08.030 10:08:38 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:08.030 10:08:38 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:08.030 Cannot find device "nvmf_tgt_br" 00:16:08.030 10:08:38 -- nvmf/common.sh@154 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:08.030 Cannot find device "nvmf_tgt_br2" 00:16:08.030 10:08:38 -- nvmf/common.sh@155 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:08.030 10:08:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:08.030 Cannot find device "nvmf_tgt_br" 00:16:08.030 10:08:38 -- nvmf/common.sh@157 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:08.030 Cannot find device "nvmf_tgt_br2" 00:16:08.030 10:08:38 -- nvmf/common.sh@158 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:08.030 10:08:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:08.030 10:08:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:08.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.030 10:08:38 -- nvmf/common.sh@161 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:08.030 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:08.030 10:08:38 -- nvmf/common.sh@162 -- # true 00:16:08.030 10:08:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:08.030 10:08:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:08.030 10:08:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:08.030 10:08:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:08.030 10:08:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:08.030 10:08:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:08.030 10:08:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:08.031 10:08:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:08.031 10:08:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:08.031 10:08:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:08.031 10:08:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:08.031 10:08:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:08.031 10:08:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:08.031 10:08:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:08.031 10:08:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:08.031 10:08:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:08.031 10:08:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:08.031 10:08:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:08.031 10:08:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:08.031 10:08:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:08.031 10:08:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:08.289 10:08:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:08.289 10:08:38 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:08.289 10:08:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:08.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:16:08.289 00:16:08.289 --- 10.0.0.2 ping statistics --- 00:16:08.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.289 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:16:08.289 10:08:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:08.289 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:08.289 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:16:08.289 00:16:08.289 --- 10.0.0.3 ping statistics --- 00:16:08.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.289 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:16:08.289 10:08:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:08.289 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:08.289 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:16:08.289 00:16:08.289 --- 10.0.0.1 ping statistics --- 00:16:08.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.289 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:16:08.289 10:08:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.289 10:08:38 -- nvmf/common.sh@421 -- # return 0 00:16:08.289 10:08:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:08.289 10:08:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.289 10:08:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:08.289 10:08:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:08.289 10:08:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.289 10:08:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:08.289 10:08:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:08.289 10:08:38 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:16:08.289 10:08:38 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.289 10:08:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:08.289 10:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.289 10:08:38 -- nvmf/common.sh@469 -- # nvmfpid=86425 00:16:08.289 10:08:38 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.289 10:08:38 -- nvmf/common.sh@470 -- # waitforlisten 86425 00:16:08.289 10:08:38 -- common/autotest_common.sh@819 -- # '[' -z 86425 ']' 00:16:08.289 10:08:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.289 10:08:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:08.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.289 10:08:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.289 10:08:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:08.289 10:08:38 -- common/autotest_common.sh@10 -- # set +x 00:16:08.289 [2024-07-14 10:08:38.681746] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
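The nvmf_veth_init trace above amounts to a three-armed veth-and-bridge topology: the initiator keeps nvmf_init_if (10.0.0.1/24) in the root namespace, both target interfaces (10.0.0.2 and 10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the bridge nvmf_br ties the bridge-side peers together so the pings can flow. A condensed, hand-runnable sketch of the same setup, with names and addresses taken from the log (an illustration of the topology, not the literal nvmf/common.sh code):

    ip netns add nvmf_tgt_ns_spdk
    # one veth pair per arm: device <-> its bridge-side peer
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # target-side devices live inside the namespace the target will run in
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and stitch the bridge-side peers together
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # let NVMe/TCP (port 4420) in, and allow hairpin traffic across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3

The three pings in the log (10.0.0.2 and 10.0.0.3 from the root namespace, then 10.0.0.1 from inside the namespace) verify both directions of the data path before nvmf_tgt is launched under ip netns exec.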
00:16:08.289 [2024-07-14 10:08:38.682011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.289 [2024-07-14 10:08:38.815777] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.547 [2024-07-14 10:08:38.883359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.547 [2024-07-14 10:08:38.883776] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.547 [2024-07-14 10:08:38.883797] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.547 [2024-07-14 10:08:38.883806] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.547 [2024-07-14 10:08:38.883955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.547 [2024-07-14 10:08:38.884089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.547 [2024-07-14 10:08:38.884568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.547 [2024-07-14 10:08:38.884644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.482 10:08:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:09.482 10:08:39 -- common/autotest_common.sh@852 -- # return 0 00:16:09.482 10:08:39 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.483 10:08:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:09.483 10:08:39 -- common/autotest_common.sh@10 -- # set +x 00:16:09.483 10:08:39 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.483 10:08:39 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:09.483 [2024-07-14 10:08:39.967726] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.483 10:08:40 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.048 10:08:40 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:16:10.048 10:08:40 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.306 10:08:40 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:16:10.306 10:08:40 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.306 10:08:40 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:16:10.306 10:08:40 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:10.565 10:08:41 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:16:10.565 10:08:41 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:16:10.822 10:08:41 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.080 10:08:41 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:16:11.080 10:08:41 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.338 10:08:41 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:16:11.338 10:08:41 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:11.596 10:08:42 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:16:11.596 10:08:42 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:16:11.854 10:08:42 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:12.112 10:08:42 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:12.112 10:08:42 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:12.370 10:08:42 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:16:12.370 10:08:42 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:12.628 10:08:42 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:12.628 [2024-07-14 10:08:43.105584] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:12.628 10:08:43 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:16:12.885 10:08:43 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:16:13.142 10:08:43 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:13.398 10:08:43 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:16:13.398 10:08:43 -- common/autotest_common.sh@1177 -- # local i=0 00:16:13.398 10:08:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:16:13.398 10:08:43 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:16:13.398 10:08:43 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:16:13.398 10:08:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:16:15.339 10:08:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:16:15.339 10:08:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:16:15.339 10:08:45 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:16:15.339 10:08:45 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:16:15.339 10:08:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:16:15.339 10:08:45 -- common/autotest_common.sh@1187 -- # return 0 00:16:15.339 10:08:45 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:16:15.339 [global] 00:16:15.339 thread=1 00:16:15.339 invalidate=1 00:16:15.339 rw=write 00:16:15.339 time_based=1 00:16:15.339 runtime=1 00:16:15.339 ioengine=libaio 00:16:15.339 direct=1 00:16:15.339 bs=4096 00:16:15.339 iodepth=1 00:16:15.339 norandommap=0 00:16:15.339 numjobs=1 00:16:15.339 00:16:15.339 verify_dump=1 00:16:15.339 verify_backlog=512 00:16:15.339 verify_state_save=0 00:16:15.339 do_verify=1 00:16:15.339 verify=crc32c-intel 00:16:15.339 [job0] 00:16:15.339 filename=/dev/nvme0n1 00:16:15.339 [job1] 00:16:15.339 filename=/dev/nvme0n2 00:16:15.339 [job2] 00:16:15.339 filename=/dev/nvme0n3 00:16:15.339 [job3] 00:16:15.339 filename=/dev/nvme0n4 00:16:15.339 Could not set queue depth (nvme0n1) 00:16:15.339 Could not set queue depth (nvme0n2) 
00:16:15.339 Could not set queue depth (nvme0n3) 00:16:15.339 Could not set queue depth (nvme0n4) 00:16:15.597 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.597 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.597 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.597 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:15.597 fio-3.35 00:16:15.597 Starting 4 threads 00:16:16.971 00:16:16.971 job0: (groupid=0, jobs=1): err= 0: pid=86723: Sun Jul 14 10:08:47 2024 00:16:16.971 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:16.971 slat (nsec): min=13344, max=38968, avg=16059.11, stdev=2198.49 00:16:16.971 clat (usec): min=129, max=1620, avg=152.95, stdev=28.41 00:16:16.971 lat (usec): min=144, max=1644, avg=169.01, stdev=28.65 00:16:16.971 clat percentiles (usec): 00:16:16.971 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:16:16.971 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 153], 60.00th=[ 155], 00:16:16.971 | 70.00th=[ 157], 80.00th=[ 161], 90.00th=[ 165], 95.00th=[ 169], 00:16:16.971 | 99.00th=[ 178], 99.50th=[ 180], 99.90th=[ 186], 99.95th=[ 375], 00:16:16.971 | 99.99th=[ 1614] 00:16:16.971 write: IOPS=3288, BW=12.8MiB/s (13.5MB/s)(12.9MiB/1001msec); 0 zone resets 00:16:16.971 slat (usec): min=19, max=103, avg=23.26, stdev= 4.10 00:16:16.971 clat (usec): min=94, max=191, avg=119.02, stdev=10.12 00:16:16.971 lat (usec): min=114, max=287, avg=142.28, stdev=11.65 00:16:16.971 clat percentiles (usec): 00:16:16.971 | 1.00th=[ 100], 5.00th=[ 104], 10.00th=[ 106], 20.00th=[ 111], 00:16:16.971 | 30.00th=[ 114], 40.00th=[ 117], 50.00th=[ 119], 60.00th=[ 122], 00:16:16.971 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:16:16.971 | 99.00th=[ 147], 99.50th=[ 149], 99.90th=[ 159], 99.95th=[ 184], 00:16:16.971 | 99.99th=[ 192] 00:16:16.971 bw ( KiB/s): min=12688, max=12688, per=31.14%, avg=12688.00, stdev= 0.00, samples=1 00:16:16.971 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:16:16.971 lat (usec) : 100=0.53%, 250=99.43%, 500=0.02% 00:16:16.971 lat (msec) : 2=0.02% 00:16:16.971 cpu : usr=2.50%, sys=9.40%, ctx=6366, majf=0, minf=11 00:16:16.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.971 issued rwts: total=3072,3292,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.971 job1: (groupid=0, jobs=1): err= 0: pid=86724: Sun Jul 14 10:08:47 2024 00:16:16.971 read: IOPS=1724, BW=6897KiB/s (7063kB/s)(6904KiB/1001msec) 00:16:16.971 slat (nsec): min=13991, max=64629, avg=18576.77, stdev=6203.41 00:16:16.971 clat (usec): min=136, max=1080, avg=279.88, stdev=55.78 00:16:16.971 lat (usec): min=152, max=1145, avg=298.46, stdev=60.24 00:16:16.971 clat percentiles (usec): 00:16:16.971 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 255], 00:16:16.971 | 30.00th=[ 260], 40.00th=[ 262], 50.00th=[ 265], 60.00th=[ 269], 00:16:16.971 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 322], 95.00th=[ 396], 00:16:16.971 | 99.00th=[ 478], 99.50th=[ 537], 99.90th=[ 955], 99.95th=[ 1074], 00:16:16.971 | 99.99th=[ 1074] 
00:16:16.971 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:16.971 slat (usec): min=17, max=122, avg=25.49, stdev= 6.89 00:16:16.971 clat (usec): min=97, max=1241, avg=207.69, stdev=32.17 00:16:16.971 lat (usec): min=119, max=1265, avg=233.17, stdev=32.57 00:16:16.971 clat percentiles (usec): 00:16:16.971 | 1.00th=[ 143], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:16:16.971 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:16:16.971 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 239], 00:16:16.971 | 99.00th=[ 258], 99.50th=[ 289], 99.90th=[ 486], 99.95th=[ 506], 00:16:16.971 | 99.99th=[ 1237] 00:16:16.971 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:16:16.971 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:16.971 lat (usec) : 100=0.03%, 250=58.85%, 500=40.78%, 750=0.24%, 1000=0.05% 00:16:16.971 lat (msec) : 2=0.05% 00:16:16.971 cpu : usr=1.30%, sys=6.60%, ctx=3775, majf=0, minf=9 00:16:16.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.971 issued rwts: total=1726,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.971 job2: (groupid=0, jobs=1): err= 0: pid=86725: Sun Jul 14 10:08:47 2024 00:16:16.971 read: IOPS=1750, BW=7001KiB/s (7169kB/s)(7008KiB/1001msec) 00:16:16.971 slat (nsec): min=9671, max=50268, avg=18358.99, stdev=2914.12 00:16:16.971 clat (usec): min=147, max=968, avg=274.86, stdev=47.31 00:16:16.971 lat (usec): min=164, max=1003, avg=293.22, stdev=47.44 00:16:16.971 clat percentiles (usec): 00:16:16.971 | 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 247], 20.00th=[ 251], 00:16:16.971 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:16:16.971 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[ 310], 95.00th=[ 359], 00:16:16.971 | 99.00th=[ 461], 99.50th=[ 478], 99.90th=[ 685], 99.95th=[ 971], 00:16:16.971 | 99.99th=[ 971] 00:16:16.971 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:16:16.971 slat (nsec): min=17582, max=91679, avg=26089.29, stdev=4576.21 00:16:16.971 clat (usec): min=108, max=1672, avg=207.46, stdev=39.09 00:16:16.971 lat (usec): min=135, max=1697, avg=233.55, stdev=39.18 00:16:16.971 clat percentiles (usec): 00:16:16.972 | 1.00th=[ 172], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194], 00:16:16.972 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 210], 00:16:16.972 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 233], 00:16:16.972 | 99.00th=[ 260], 99.50th=[ 297], 99.90th=[ 457], 99.95th=[ 660], 00:16:16.972 | 99.99th=[ 1680] 00:16:16.972 bw ( KiB/s): min= 8192, max= 8192, per=20.11%, avg=8192.00, stdev= 0.00, samples=1 00:16:16.972 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:16:16.972 lat (usec) : 250=60.82%, 500=38.97%, 750=0.16%, 1000=0.03% 00:16:16.972 lat (msec) : 2=0.03% 00:16:16.972 cpu : usr=1.90%, sys=6.10%, ctx=3803, majf=0, minf=11 00:16:16.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.972 issued rwts: total=1752,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:16:16.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.972 job3: (groupid=0, jobs=1): err= 0: pid=86726: Sun Jul 14 10:08:47 2024 00:16:16.972 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:16.972 slat (nsec): min=12539, max=63416, avg=17248.68, stdev=3557.11 00:16:16.972 clat (usec): min=145, max=7306, avg=182.76, stdev=163.74 00:16:16.972 lat (usec): min=163, max=7322, avg=200.01, stdev=163.75 00:16:16.972 clat percentiles (usec): 00:16:16.972 | 1.00th=[ 153], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 163], 00:16:16.972 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:16:16.972 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 188], 95.00th=[ 196], 00:16:16.972 | 99.00th=[ 429], 99.50th=[ 486], 99.90th=[ 1500], 99.95th=[ 3556], 00:16:16.972 | 99.99th=[ 7308] 00:16:16.972 write: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:16:16.972 slat (nsec): min=12254, max=66345, avg=24028.34, stdev=3860.61 00:16:16.972 clat (usec): min=106, max=589, avg=145.91, stdev=41.81 00:16:16.972 lat (usec): min=128, max=607, avg=169.94, stdev=41.20 00:16:16.972 clat percentiles (usec): 00:16:16.972 | 1.00th=[ 115], 5.00th=[ 120], 10.00th=[ 123], 20.00th=[ 127], 00:16:16.972 | 30.00th=[ 130], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 139], 00:16:16.972 | 70.00th=[ 143], 80.00th=[ 149], 90.00th=[ 161], 95.00th=[ 245], 00:16:16.972 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 478], 99.95th=[ 545], 00:16:16.972 | 99.99th=[ 586] 00:16:16.972 bw ( KiB/s): min=12288, max=12288, per=30.16%, avg=12288.00, stdev= 0.00, samples=1 00:16:16.972 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:16.972 lat (usec) : 250=96.31%, 500=3.48%, 750=0.11%, 1000=0.02% 00:16:16.972 lat (msec) : 2=0.04%, 4=0.02%, 10=0.02% 00:16:16.972 cpu : usr=2.30%, sys=8.40%, ctx=5368, majf=0, minf=4 00:16:16.972 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:16.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:16.972 issued rwts: total=2560,2807,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:16.972 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:16.972 00:16:16.972 Run status group 0 (all jobs): 00:16:16.972 READ: bw=35.5MiB/s (37.3MB/s), 6897KiB/s-12.0MiB/s (7063kB/s-12.6MB/s), io=35.6MiB (37.3MB), run=1001-1001msec 00:16:16.972 WRITE: bw=39.8MiB/s (41.7MB/s), 8184KiB/s-12.8MiB/s (8380kB/s-13.5MB/s), io=39.8MiB (41.8MB), run=1001-1001msec 00:16:16.972 00:16:16.972 Disk stats (read/write): 00:16:16.972 nvme0n1: ios=2609/2867, merge=0/0, ticks=415/364, in_queue=779, util=86.95% 00:16:16.972 nvme0n2: ios=1563/1784, merge=0/0, ticks=479/389, in_queue=868, util=92.55% 00:16:16.972 nvme0n3: ios=1569/1785, merge=0/0, ticks=488/397, in_queue=885, util=92.19% 00:16:16.972 nvme0n4: ios=2251/2560, merge=0/0, ticks=402/364, in_queue=766, util=89.41% 00:16:16.972 10:08:47 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:16:16.972 [global] 00:16:16.972 thread=1 00:16:16.972 invalidate=1 00:16:16.972 rw=randwrite 00:16:16.972 time_based=1 00:16:16.972 runtime=1 00:16:16.972 ioengine=libaio 00:16:16.972 direct=1 00:16:16.972 bs=4096 00:16:16.972 iodepth=1 00:16:16.972 norandommap=0 00:16:16.972 numjobs=1 00:16:16.972 00:16:16.972 verify_dump=1 00:16:16.972 verify_backlog=512 00:16:16.972 verify_state_save=0 00:16:16.972 do_verify=1 
00:16:16.972 verify=crc32c-intel 00:16:16.972 [job0] 00:16:16.972 filename=/dev/nvme0n1 00:16:16.972 [job1] 00:16:16.972 filename=/dev/nvme0n2 00:16:16.972 [job2] 00:16:16.972 filename=/dev/nvme0n3 00:16:16.972 [job3] 00:16:16.972 filename=/dev/nvme0n4 00:16:16.972 Could not set queue depth (nvme0n1) 00:16:16.972 Could not set queue depth (nvme0n2) 00:16:16.972 Could not set queue depth (nvme0n3) 00:16:16.972 Could not set queue depth (nvme0n4) 00:16:16.972 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.972 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.972 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.972 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:16.972 fio-3.35 00:16:16.972 Starting 4 threads 00:16:18.345 00:16:18.345 job0: (groupid=0, jobs=1): err= 0: pid=86779: Sun Jul 14 10:08:48 2024 00:16:18.345 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:18.345 slat (nsec): min=11234, max=47725, avg=14671.71, stdev=2510.57 00:16:18.345 clat (usec): min=126, max=1813, avg=177.24, stdev=56.60 00:16:18.345 lat (usec): min=141, max=1828, avg=191.91, stdev=56.45 00:16:18.345 clat percentiles (usec): 00:16:18.345 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:16:18.345 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:16:18.345 | 70.00th=[ 167], 80.00th=[ 227], 90.00th=[ 258], 95.00th=[ 269], 00:16:18.345 | 99.00th=[ 310], 99.50th=[ 330], 99.90th=[ 363], 99.95th=[ 857], 00:16:18.345 | 99.99th=[ 1811] 00:16:18.345 write: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(11.3MiB/1001msec); 0 zone resets 00:16:18.345 slat (nsec): min=10832, max=88443, avg=20743.83, stdev=4024.18 00:16:18.345 clat (usec): min=98, max=315, avg=152.30, stdev=46.57 00:16:18.345 lat (usec): min=119, max=337, avg=173.05, stdev=45.68 00:16:18.345 clat percentiles (usec): 00:16:18.345 | 1.00th=[ 105], 5.00th=[ 112], 10.00th=[ 116], 20.00th=[ 120], 00:16:18.345 | 30.00th=[ 124], 40.00th=[ 127], 50.00th=[ 130], 60.00th=[ 135], 00:16:18.345 | 70.00th=[ 145], 80.00th=[ 206], 90.00th=[ 233], 95.00th=[ 247], 00:16:18.345 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 289], 99.95th=[ 314], 00:16:18.345 | 99.99th=[ 314] 00:16:18.345 bw ( KiB/s): min=12688, max=12688, per=27.17%, avg=12688.00, stdev= 0.00, samples=1 00:16:18.345 iops : min= 3172, max= 3172, avg=3172.00, stdev= 0.00, samples=1 00:16:18.345 lat (usec) : 100=0.02%, 250=91.77%, 500=8.18%, 1000=0.02% 00:16:18.346 lat (msec) : 2=0.02% 00:16:18.346 cpu : usr=2.00%, sys=7.40%, ctx=5442, majf=0, minf=17 00:16:18.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 issued rwts: total=2560,2882,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.346 job1: (groupid=0, jobs=1): err= 0: pid=86780: Sun Jul 14 10:08:48 2024 00:16:18.346 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:16:18.346 slat (nsec): min=10318, max=73810, avg=15712.14, stdev=2800.52 00:16:18.346 clat (usec): min=130, max=3223, avg=188.01, stdev=76.94 00:16:18.346 lat (usec): min=148, max=3239, avg=203.72, stdev=76.89 
00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:16:18.346 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:16:18.346 | 70.00th=[ 182], 80.00th=[ 241], 90.00th=[ 260], 95.00th=[ 273], 00:16:18.346 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 545], 99.95th=[ 627], 00:16:18.346 | 99.99th=[ 3228] 00:16:18.346 write: IOPS=2656, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:16:18.346 slat (nsec): min=11088, max=88495, avg=21798.94, stdev=4223.81 00:16:18.346 clat (usec): min=101, max=7396, avg=154.32, stdev=157.06 00:16:18.346 lat (usec): min=123, max=7417, avg=176.12, stdev=156.94 00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 108], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 123], 00:16:18.346 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:16:18.346 | 70.00th=[ 143], 80.00th=[ 194], 90.00th=[ 223], 95.00th=[ 239], 00:16:18.346 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 1647], 99.95th=[ 2376], 00:16:18.346 | 99.99th=[ 7373] 00:16:18.346 bw ( KiB/s): min=12288, max=12288, per=26.32%, avg=12288.00, stdev= 0.00, samples=1 00:16:18.346 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:18.346 lat (usec) : 250=91.38%, 500=8.47%, 750=0.04%, 1000=0.02% 00:16:18.346 lat (msec) : 2=0.04%, 4=0.04%, 10=0.02% 00:16:18.346 cpu : usr=2.00%, sys=7.40%, ctx=5220, majf=0, minf=8 00:16:18.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 issued rwts: total=2560,2659,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.346 job2: (groupid=0, jobs=1): err= 0: pid=86781: Sun Jul 14 10:08:48 2024 00:16:18.346 read: IOPS=2590, BW=10.1MiB/s (10.6MB/s)(10.1MiB/1001msec) 00:16:18.346 slat (nsec): min=18094, max=72611, avg=21929.35, stdev=3453.06 00:16:18.346 clat (usec): min=135, max=356, avg=162.02, stdev=14.21 00:16:18.346 lat (usec): min=154, max=376, avg=183.95, stdev=14.58 00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:16:18.346 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:16:18.346 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:16:18.346 | 99.00th=[ 202], 99.50th=[ 233], 99.90th=[ 269], 99.95th=[ 281], 00:16:18.346 | 99.99th=[ 359] 00:16:18.346 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:18.346 slat (nsec): min=23529, max=73187, avg=30927.53, stdev=5187.40 00:16:18.346 clat (usec): min=104, max=395, avg=135.09, stdev=15.64 00:16:18.346 lat (usec): min=131, max=439, avg=166.02, stdev=16.86 00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 113], 5.00th=[ 117], 10.00th=[ 120], 20.00th=[ 125], 00:16:18.346 | 30.00th=[ 128], 40.00th=[ 131], 50.00th=[ 135], 60.00th=[ 137], 00:16:18.346 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:16:18.346 | 99.00th=[ 174], 99.50th=[ 182], 99.90th=[ 297], 99.95th=[ 379], 00:16:18.346 | 99.99th=[ 396] 00:16:18.346 bw ( KiB/s): min=12288, max=12288, per=26.32%, avg=12288.00, stdev= 0.00, samples=1 00:16:18.346 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:18.346 lat (usec) : 250=99.70%, 500=0.30% 00:16:18.346 cpu : usr=3.30%, sys=10.60%, ctx=5668, majf=0, 
minf=11 00:16:18.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 issued rwts: total=2593,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.346 job3: (groupid=0, jobs=1): err= 0: pid=86782: Sun Jul 14 10:08:48 2024 00:16:18.346 read: IOPS=2784, BW=10.9MiB/s (11.4MB/s)(10.9MiB/1001msec) 00:16:18.346 slat (nsec): min=13289, max=45282, avg=15962.30, stdev=2588.06 00:16:18.346 clat (usec): min=137, max=313, avg=163.41, stdev=11.55 00:16:18.346 lat (usec): min=151, max=330, avg=179.38, stdev=11.70 00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 151], 20.00th=[ 155], 00:16:18.346 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:16:18.346 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 186], 00:16:18.346 | 99.00th=[ 196], 99.50th=[ 200], 99.90th=[ 212], 99.95th=[ 233], 00:16:18.346 | 99.99th=[ 314] 00:16:18.346 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:16:18.346 slat (nsec): min=18869, max=88005, avg=23547.14, stdev=4919.72 00:16:18.346 clat (usec): min=98, max=755, avg=135.60, stdev=21.15 00:16:18.346 lat (usec): min=123, max=778, avg=159.14, stdev=21.81 00:16:18.346 clat percentiles (usec): 00:16:18.346 | 1.00th=[ 111], 5.00th=[ 117], 10.00th=[ 121], 20.00th=[ 125], 00:16:18.346 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 135], 60.00th=[ 137], 00:16:18.346 | 70.00th=[ 141], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 157], 00:16:18.346 | 99.00th=[ 169], 99.50th=[ 176], 99.90th=[ 469], 99.95th=[ 611], 00:16:18.346 | 99.99th=[ 758] 00:16:18.346 bw ( KiB/s): min=12288, max=12288, per=26.32%, avg=12288.00, stdev= 0.00, samples=1 00:16:18.346 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:16:18.346 lat (usec) : 100=0.02%, 250=99.88%, 500=0.07%, 750=0.02%, 1000=0.02% 00:16:18.346 cpu : usr=2.50%, sys=8.40%, ctx=5859, majf=0, minf=9 00:16:18.346 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:18.346 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:18.346 issued rwts: total=2787,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:18.346 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:18.346 00:16:18.346 Run status group 0 (all jobs): 00:16:18.346 READ: bw=41.0MiB/s (43.0MB/s), 9.99MiB/s-10.9MiB/s (10.5MB/s-11.4MB/s), io=41.0MiB (43.0MB), run=1001-1001msec 00:16:18.346 WRITE: bw=45.6MiB/s (47.8MB/s), 10.4MiB/s-12.0MiB/s (10.9MB/s-12.6MB/s), io=45.6MiB (47.9MB), run=1001-1001msec 00:16:18.346 00:16:18.346 Disk stats (read/write): 00:16:18.346 nvme0n1: ios=2392/2560, merge=0/0, ticks=451/390, in_queue=841, util=88.38% 00:16:18.346 nvme0n2: ios=2193/2560, merge=0/0, ticks=409/408, in_queue=817, util=88.06% 00:16:18.346 nvme0n3: ios=2296/2560, merge=0/0, ticks=392/378, in_queue=770, util=89.25% 00:16:18.346 nvme0n4: ios=2469/2560, merge=0/0, ticks=419/378, in_queue=797, util=89.71% 00:16:18.346 10:08:48 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:16:18.346 [global] 00:16:18.346 thread=1 00:16:18.346 invalidate=1 00:16:18.346 rw=write 00:16:18.346 time_based=1 00:16:18.346 
runtime=1 00:16:18.346 ioengine=libaio 00:16:18.346 direct=1 00:16:18.346 bs=4096 00:16:18.346 iodepth=128 00:16:18.346 norandommap=0 00:16:18.346 numjobs=1 00:16:18.346 00:16:18.346 verify_dump=1 00:16:18.346 verify_backlog=512 00:16:18.346 verify_state_save=0 00:16:18.346 do_verify=1 00:16:18.346 verify=crc32c-intel 00:16:18.346 [job0] 00:16:18.346 filename=/dev/nvme0n1 00:16:18.346 [job1] 00:16:18.346 filename=/dev/nvme0n2 00:16:18.346 [job2] 00:16:18.346 filename=/dev/nvme0n3 00:16:18.346 [job3] 00:16:18.346 filename=/dev/nvme0n4 00:16:18.346 Could not set queue depth (nvme0n1) 00:16:18.346 Could not set queue depth (nvme0n2) 00:16:18.346 Could not set queue depth (nvme0n3) 00:16:18.346 Could not set queue depth (nvme0n4) 00:16:18.346 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.346 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.346 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.346 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:18.346 fio-3.35 00:16:18.346 Starting 4 threads 00:16:19.789 00:16:19.789 job0: (groupid=0, jobs=1): err= 0: pid=86836: Sun Jul 14 10:08:49 2024 00:16:19.789 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:16:19.789 slat (usec): min=8, max=2523, avg=82.42, stdev=349.70 00:16:19.789 clat (usec): min=8326, max=13720, avg=11074.59, stdev=843.05 00:16:19.789 lat (usec): min=8719, max=13906, avg=11157.02, stdev=794.34 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 8979], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10552], 00:16:19.789 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:16:19.789 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:16:19.789 | 99.00th=[13042], 99.50th=[13173], 99.90th=[13435], 99.95th=[13698], 00:16:19.789 | 99.99th=[13698] 00:16:19.789 write: IOPS=5766, BW=22.5MiB/s (23.6MB/s)(22.6MiB/1002msec); 0 zone resets 00:16:19.789 slat (usec): min=9, max=2675, avg=85.49, stdev=331.36 00:16:19.789 clat (usec): min=201, max=13849, avg=11118.24, stdev=1276.84 00:16:19.789 lat (usec): min=2138, max=13867, avg=11203.73, stdev=1261.58 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 6128], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[10159], 00:16:19.789 | 30.00th=[10683], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:16:19.789 | 70.00th=[11863], 80.00th=[11994], 90.00th=[12256], 95.00th=[12649], 00:16:19.789 | 99.00th=[13173], 99.50th=[13304], 99.90th=[13829], 99.95th=[13829], 00:16:19.789 | 99.99th=[13829] 00:16:19.789 bw ( KiB/s): min=21224, max=23976, per=33.98%, avg=22600.00, stdev=1945.96, samples=2 00:16:19.789 iops : min= 5306, max= 5994, avg=5650.00, stdev=486.49, samples=2 00:16:19.789 lat (usec) : 250=0.01% 00:16:19.789 lat (msec) : 4=0.28%, 10=14.59%, 20=85.12% 00:16:19.789 cpu : usr=4.40%, sys=15.88%, ctx=880, majf=0, minf=1 00:16:19.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:16:19.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.789 issued rwts: total=5632,5778,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.789 job1: (groupid=0, jobs=1): err= 0: pid=86837: 
Sun Jul 14 10:08:49 2024 00:16:19.789 read: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec) 00:16:19.789 slat (usec): min=9, max=8814, avg=277.74, stdev=1161.91 00:16:19.789 clat (usec): min=15132, max=61316, avg=34065.36, stdev=13759.62 00:16:19.789 lat (usec): min=16853, max=61354, avg=34343.10, stdev=13824.92 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[17433], 5.00th=[18744], 10.00th=[20579], 20.00th=[23462], 00:16:19.789 | 30.00th=[23725], 40.00th=[23987], 50.00th=[24773], 60.00th=[34341], 00:16:19.789 | 70.00th=[41157], 80.00th=[51643], 90.00th=[55837], 95.00th=[58459], 00:16:19.789 | 99.00th=[61080], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:16:19.789 | 99.99th=[61080] 00:16:19.789 write: IOPS=2201, BW=8805KiB/s (9016kB/s)(8840KiB/1004msec); 0 zone resets 00:16:19.789 slat (usec): min=16, max=9117, avg=185.20, stdev=914.67 00:16:19.789 clat (usec): min=235, max=52612, avg=25293.89, stdev=8529.93 00:16:19.789 lat (usec): min=6554, max=52661, avg=25479.10, stdev=8523.08 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 7504], 5.00th=[16057], 10.00th=[16188], 20.00th=[16450], 00:16:19.789 | 30.00th=[17433], 40.00th=[21365], 50.00th=[24773], 60.00th=[28443], 00:16:19.789 | 70.00th=[29492], 80.00th=[34866], 90.00th=[35390], 95.00th=[38536], 00:16:19.789 | 99.00th=[47973], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:16:19.789 | 99.99th=[52691] 00:16:19.789 bw ( KiB/s): min= 8208, max= 8464, per=12.53%, avg=8336.00, stdev=181.02, samples=2 00:16:19.789 iops : min= 2052, max= 2116, avg=2084.00, stdev=45.25, samples=2 00:16:19.789 lat (usec) : 250=0.02% 00:16:19.789 lat (msec) : 10=0.99%, 20=21.77%, 50=66.51%, 100=10.71% 00:16:19.789 cpu : usr=1.99%, sys=9.77%, ctx=195, majf=0, minf=10 00:16:19.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:16:19.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.789 issued rwts: total=2048,2210,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.789 job2: (groupid=0, jobs=1): err= 0: pid=86838: Sun Jul 14 10:08:49 2024 00:16:19.789 read: IOPS=3171, BW=12.4MiB/s (13.0MB/s)(12.4MiB/1004msec) 00:16:19.789 slat (usec): min=3, max=8453, avg=140.53, stdev=703.99 00:16:19.789 clat (usec): min=1217, max=33089, avg=16959.42, stdev=3568.18 00:16:19.789 lat (usec): min=5033, max=33107, avg=17099.95, stdev=3624.24 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 9372], 5.00th=[13042], 10.00th=[13566], 20.00th=[14353], 00:16:19.789 | 30.00th=[15139], 40.00th=[15795], 50.00th=[16319], 60.00th=[16909], 00:16:19.789 | 70.00th=[17957], 80.00th=[19268], 90.00th=[21365], 95.00th=[23200], 00:16:19.789 | 99.00th=[29492], 99.50th=[32375], 99.90th=[33162], 99.95th=[33162], 00:16:19.789 | 99.99th=[33162] 00:16:19.789 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:16:19.789 slat (usec): min=14, max=9914, avg=146.11, stdev=661.62 00:16:19.789 clat (usec): min=9458, max=42814, avg=20281.56, stdev=6343.38 00:16:19.789 lat (usec): min=9482, max=42843, avg=20427.67, stdev=6392.99 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[12387], 5.00th=[13698], 10.00th=[14091], 20.00th=[14746], 00:16:19.789 | 30.00th=[15270], 40.00th=[16909], 50.00th=[17695], 60.00th=[20579], 00:16:19.789 | 70.00th=[23725], 80.00th=[24773], 90.00th=[30540], 95.00th=[33424], 00:16:19.789 | 
99.00th=[36963], 99.50th=[39060], 99.90th=[42730], 99.95th=[42730], 00:16:19.789 | 99.99th=[42730] 00:16:19.789 bw ( KiB/s): min=14012, max=14560, per=21.48%, avg=14286.00, stdev=387.49, samples=2 00:16:19.789 iops : min= 3503, max= 3640, avg=3571.50, stdev=96.87, samples=2 00:16:19.789 lat (msec) : 2=0.01%, 10=0.75%, 20=68.93%, 50=30.30% 00:16:19.789 cpu : usr=3.19%, sys=11.47%, ctx=375, majf=0, minf=1 00:16:19.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:16:19.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.789 issued rwts: total=3184,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.789 job3: (groupid=0, jobs=1): err= 0: pid=86839: Sun Jul 14 10:08:49 2024 00:16:19.789 read: IOPS=4724, BW=18.5MiB/s (19.4MB/s)(18.5MiB/1002msec) 00:16:19.789 slat (usec): min=6, max=4185, avg=96.26, stdev=426.45 00:16:19.789 clat (usec): min=383, max=15858, avg=12758.85, stdev=1404.12 00:16:19.789 lat (usec): min=1319, max=16628, avg=12855.12, stdev=1352.38 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 5080], 5.00th=[10683], 10.00th=[11076], 20.00th=[12256], 00:16:19.789 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13042], 60.00th=[13173], 00:16:19.789 | 70.00th=[13304], 80.00th=[13566], 90.00th=[13960], 95.00th=[14091], 00:16:19.789 | 99.00th=[15795], 99.50th=[15795], 99.90th=[15795], 99.95th=[15795], 00:16:19.789 | 99.99th=[15795] 00:16:19.789 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:16:19.789 slat (usec): min=11, max=4147, avg=98.76, stdev=413.23 00:16:19.789 clat (usec): min=9081, max=15709, avg=12905.57, stdev=1213.00 00:16:19.789 lat (usec): min=9106, max=15733, avg=13004.33, stdev=1183.92 00:16:19.789 clat percentiles (usec): 00:16:19.789 | 1.00th=[ 9765], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:16:19.789 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:16:19.789 | 70.00th=[13698], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:16:19.789 | 99.00th=[15139], 99.50th=[15270], 99.90th=[15664], 99.95th=[15664], 00:16:19.789 | 99.99th=[15664] 00:16:19.789 bw ( KiB/s): min=20464, max=20521, per=30.81%, avg=20492.50, stdev=40.31, samples=2 00:16:19.789 iops : min= 5116, max= 5130, avg=5123.00, stdev= 9.90, samples=2 00:16:19.789 lat (usec) : 500=0.01% 00:16:19.789 lat (msec) : 2=0.08%, 10=1.78%, 20=98.13% 00:16:19.789 cpu : usr=4.70%, sys=13.99%, ctx=691, majf=0, minf=1 00:16:19.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:16:19.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:19.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:19.789 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:19.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:19.789 00:16:19.789 Run status group 0 (all jobs): 00:16:19.789 READ: bw=60.7MiB/s (63.6MB/s), 8159KiB/s-22.0MiB/s (8355kB/s-23.0MB/s), io=60.9MiB (63.9MB), run=1002-1004msec 00:16:19.789 WRITE: bw=64.9MiB/s (68.1MB/s), 8805KiB/s-22.5MiB/s (9016kB/s-23.6MB/s), io=65.2MiB (68.4MB), run=1002-1004msec 00:16:19.789 00:16:19.789 Disk stats (read/write): 00:16:19.789 nvme0n1: ios=4785/5120, merge=0/0, ticks=12335/12144, in_queue=24479, util=88.70% 00:16:19.789 nvme0n2: ios=1841/2048, merge=0/0, ticks=14764/10647, 
in_queue=25411, util=89.40% 00:16:19.789 nvme0n3: ios=2566/3049, merge=0/0, ticks=21865/28935, in_queue=50800, util=89.23% 00:16:19.789 nvme0n4: ios=4096/4407, merge=0/0, ticks=12311/11895, in_queue=24206, util=89.58% 00:16:19.789 10:08:49 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:16:19.789 [global] 00:16:19.789 thread=1 00:16:19.789 invalidate=1 00:16:19.789 rw=randwrite 00:16:19.789 time_based=1 00:16:19.789 runtime=1 00:16:19.789 ioengine=libaio 00:16:19.789 direct=1 00:16:19.789 bs=4096 00:16:19.789 iodepth=128 00:16:19.789 norandommap=0 00:16:19.789 numjobs=1 00:16:19.789 00:16:19.789 verify_dump=1 00:16:19.789 verify_backlog=512 00:16:19.789 verify_state_save=0 00:16:19.789 do_verify=1 00:16:19.789 verify=crc32c-intel 00:16:19.789 [job0] 00:16:19.789 filename=/dev/nvme0n1 00:16:19.789 [job1] 00:16:19.789 filename=/dev/nvme0n2 00:16:19.789 [job2] 00:16:19.789 filename=/dev/nvme0n3 00:16:19.789 [job3] 00:16:19.789 filename=/dev/nvme0n4 00:16:19.789 Could not set queue depth (nvme0n1) 00:16:19.789 Could not set queue depth (nvme0n2) 00:16:19.789 Could not set queue depth (nvme0n3) 00:16:19.789 Could not set queue depth (nvme0n4) 00:16:19.789 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.789 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.789 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.789 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:19.789 fio-3.35 00:16:19.789 Starting 4 threads 00:16:21.163 00:16:21.163 job0: (groupid=0, jobs=1): err= 0: pid=86898: Sun Jul 14 10:08:51 2024 00:16:21.163 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:16:21.163 slat (usec): min=3, max=5256, avg=149.55, stdev=660.07 00:16:21.163 clat (usec): min=12619, max=27936, avg=19119.56, stdev=1830.82 00:16:21.163 lat (usec): min=13656, max=27955, avg=19269.11, stdev=1768.38 00:16:21.163 clat percentiles (usec): 00:16:21.163 | 1.00th=[14746], 5.00th=[16057], 10.00th=[16712], 20.00th=[17433], 00:16:21.163 | 30.00th=[17957], 40.00th=[19006], 50.00th=[19530], 60.00th=[19792], 00:16:21.163 | 70.00th=[20055], 80.00th=[20579], 90.00th=[21103], 95.00th=[21627], 00:16:21.163 | 99.00th=[24249], 99.50th=[24773], 99.90th=[25560], 99.95th=[27395], 00:16:21.163 | 99.99th=[27919] 00:16:21.163 write: IOPS=3440, BW=13.4MiB/s (14.1MB/s)(13.5MiB/1001msec); 0 zone resets 00:16:21.163 slat (usec): min=3, max=4698, avg=150.64, stdev=563.44 00:16:21.163 clat (usec): min=491, max=29211, avg=19525.96, stdev=3067.58 00:16:21.163 lat (usec): min=521, max=29236, avg=19676.60, stdev=3045.09 00:16:21.163 clat percentiles (usec): 00:16:21.163 | 1.00th=[ 5473], 5.00th=[15270], 10.00th=[16057], 20.00th=[17171], 00:16:21.163 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20317], 60.00th=[20579], 00:16:21.163 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21627], 95.00th=[23725], 00:16:21.163 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28705], 99.95th=[28967], 00:16:21.163 | 99.99th=[29230] 00:16:21.164 bw ( KiB/s): min=13024, max=13024, per=24.30%, avg=13024.00, stdev= 0.00, samples=1 00:16:21.164 iops : min= 3256, max= 3256, avg=3256.00, stdev= 0.00, samples=1 00:16:21.164 lat (usec) : 500=0.02%, 750=0.06% 00:16:21.164 lat (msec) : 10=1.04%, 20=53.42%, 50=45.46% 
00:16:21.164 cpu : usr=2.20%, sys=8.90%, ctx=911, majf=0, minf=9 00:16:21.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:16:21.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.164 issued rwts: total=3072,3444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.164 job1: (groupid=0, jobs=1): err= 0: pid=86899: Sun Jul 14 10:08:51 2024 00:16:21.164 read: IOPS=3576, BW=14.0MiB/s (14.7MB/s)(14.0MiB/1002msec) 00:16:21.164 slat (usec): min=8, max=7523, avg=138.98, stdev=640.09 00:16:21.164 clat (usec): min=5234, max=30299, avg=17826.56, stdev=3872.43 00:16:21.164 lat (usec): min=5279, max=30339, avg=17965.54, stdev=3910.75 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[ 7111], 5.00th=[13042], 10.00th=[13829], 20.00th=[14877], 00:16:21.164 | 30.00th=[15533], 40.00th=[16319], 50.00th=[16909], 60.00th=[17695], 00:16:21.164 | 70.00th=[19006], 80.00th=[21890], 90.00th=[24511], 95.00th=[25035], 00:16:21.164 | 99.00th=[26346], 99.50th=[26346], 99.90th=[28705], 99.95th=[28967], 00:16:21.164 | 99.99th=[30278] 00:16:21.164 write: IOPS=3591, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1002msec); 0 zone resets 00:16:21.164 slat (usec): min=12, max=4544, avg=130.95, stdev=479.04 00:16:21.164 clat (usec): min=910, max=26809, avg=17417.33, stdev=3482.44 00:16:21.164 lat (usec): min=4836, max=26839, avg=17548.28, stdev=3501.87 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[11338], 5.00th=[11863], 10.00th=[12518], 20.00th=[13173], 00:16:21.164 | 30.00th=[15533], 40.00th=[17171], 50.00th=[19006], 60.00th=[19268], 00:16:21.164 | 70.00th=[19530], 80.00th=[20055], 90.00th=[20841], 95.00th=[21890], 00:16:21.164 | 99.00th=[25560], 99.50th=[26346], 99.90th=[26870], 99.95th=[26870], 00:16:21.164 | 99.99th=[26870] 00:16:21.164 bw ( KiB/s): min=12632, max=16040, per=26.75%, avg=14336.00, stdev=2409.82, samples=2 00:16:21.164 iops : min= 3158, max= 4010, avg=3584.00, stdev=602.45, samples=2 00:16:21.164 lat (usec) : 1000=0.01% 00:16:21.164 lat (msec) : 10=0.75%, 20=76.72%, 50=22.51% 00:16:21.164 cpu : usr=3.40%, sys=11.49%, ctx=480, majf=0, minf=13 00:16:21.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:16:21.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.164 issued rwts: total=3584,3599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.164 job2: (groupid=0, jobs=1): err= 0: pid=86900: Sun Jul 14 10:08:51 2024 00:16:21.164 read: IOPS=2999, BW=11.7MiB/s (12.3MB/s)(11.8MiB/1003msec) 00:16:21.164 slat (usec): min=12, max=5616, avg=174.56, stdev=723.77 00:16:21.164 clat (usec): min=1308, max=35879, avg=22631.51, stdev=4609.91 00:16:21.164 lat (usec): min=5875, max=35916, avg=22806.07, stdev=4595.04 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[ 6783], 5.00th=[18482], 10.00th=[19268], 20.00th=[19268], 00:16:21.164 | 30.00th=[19792], 40.00th=[20841], 50.00th=[21890], 60.00th=[22676], 00:16:21.164 | 70.00th=[23987], 80.00th=[25035], 90.00th=[29754], 95.00th=[32637], 00:16:21.164 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35914], 99.95th=[35914], 00:16:21.164 | 99.99th=[35914] 00:16:21.164 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 
00:16:21.164 slat (usec): min=20, max=5602, avg=144.15, stdev=530.30 00:16:21.164 clat (usec): min=11752, max=28214, avg=18943.35, stdev=3142.90 00:16:21.164 lat (usec): min=14232, max=28283, avg=19087.50, stdev=3126.97 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[13960], 5.00th=[15008], 10.00th=[15533], 20.00th=[16188], 00:16:21.164 | 30.00th=[16450], 40.00th=[16909], 50.00th=[18744], 60.00th=[19792], 00:16:21.164 | 70.00th=[20579], 80.00th=[22152], 90.00th=[22676], 95.00th=[25035], 00:16:21.164 | 99.00th=[27395], 99.50th=[27919], 99.90th=[28181], 99.95th=[28181], 00:16:21.164 | 99.99th=[28181] 00:16:21.164 bw ( KiB/s): min=12288, max=12288, per=22.93%, avg=12288.00, stdev= 0.00, samples=2 00:16:21.164 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:16:21.164 lat (msec) : 2=0.02%, 10=0.53%, 20=48.90%, 50=50.56% 00:16:21.164 cpu : usr=4.59%, sys=11.68%, ctx=340, majf=0, minf=11 00:16:21.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:21.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.164 issued rwts: total=3008,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.164 latency : target=0, window=0, percentile=100.00%, depth=128 00:16:21.164 job3: (groupid=0, jobs=1): err= 0: pid=86901: Sun Jul 14 10:08:51 2024 00:16:21.164 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:16:21.164 slat (usec): min=2, max=4927, avg=153.62, stdev=649.69 00:16:21.164 clat (usec): min=14614, max=23988, avg=19733.88, stdev=1727.33 00:16:21.164 lat (usec): min=15174, max=27162, avg=19887.50, stdev=1637.09 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[15664], 5.00th=[16450], 10.00th=[16909], 20.00th=[18482], 00:16:21.164 | 30.00th=[19268], 40.00th=[19792], 50.00th=[20055], 60.00th=[20055], 00:16:21.164 | 70.00th=[20579], 80.00th=[21103], 90.00th=[21890], 95.00th=[22414], 00:16:21.164 | 99.00th=[22938], 99.50th=[23200], 99.90th=[23462], 99.95th=[23987], 00:16:21.164 | 99.99th=[23987] 00:16:21.164 write: IOPS=3315, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1002msec); 0 zone resets 00:16:21.164 slat (usec): min=4, max=4939, avg=152.48, stdev=560.30 00:16:21.164 clat (usec): min=563, max=24502, avg=19697.76, stdev=2533.03 00:16:21.164 lat (usec): min=4224, max=24517, avg=19850.24, stdev=2497.41 00:16:21.164 clat percentiles (usec): 00:16:21.164 | 1.00th=[ 9241], 5.00th=[15533], 10.00th=[16581], 20.00th=[18482], 00:16:21.164 | 30.00th=[19792], 40.00th=[20055], 50.00th=[20579], 60.00th=[20579], 00:16:21.164 | 70.00th=[20841], 80.00th=[21103], 90.00th=[21365], 95.00th=[22152], 00:16:21.164 | 99.00th=[23462], 99.50th=[23725], 99.90th=[24249], 99.95th=[24511], 00:16:21.164 | 99.99th=[24511] 00:16:21.164 bw ( KiB/s): min=12752, max=12825, per=23.86%, avg=12788.50, stdev=51.62, samples=2 00:16:21.164 iops : min= 3188, max= 3206, avg=3197.00, stdev=12.73, samples=2 00:16:21.164 lat (usec) : 750=0.02% 00:16:21.164 lat (msec) : 10=0.58%, 20=43.82%, 50=55.58% 00:16:21.164 cpu : usr=2.40%, sys=8.49%, ctx=1076, majf=0, minf=17 00:16:21.164 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:16:21.164 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:21.164 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:16:21.164 issued rwts: total=3072,3322,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:21.164 latency : target=0, window=0, percentile=100.00%, depth=128 
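A quick consistency check on the per-job lines above: fio's BW column is simply completed I/Os times the block size over wall-clock time. Taking job3 of this run, with all numbers straight from the log:

    3322 issued writes x 4096 B = 13,606,912 B
    13,606,912 B / 1.002 s ~= 13.58 MB/s ~= 12.95 MiB/s

which fio rounds and prints as "13.0MiB/s (13.6MB/s)(13.0MiB/1002msec)". The same arithmetic reconciles every IOPS/BW pair in these summaries.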
00:16:21.164 00:16:21.164 Run status group 0 (all jobs): 00:16:21.164 READ: bw=49.6MiB/s (52.0MB/s), 11.7MiB/s-14.0MiB/s (12.3MB/s-14.7MB/s), io=49.8MiB (52.2MB), run=1001-1003msec 00:16:21.164 WRITE: bw=52.3MiB/s (54.9MB/s), 12.0MiB/s-14.0MiB/s (12.5MB/s-14.7MB/s), io=52.5MiB (55.0MB), run=1001-1003msec 00:16:21.164 00:16:21.164 Disk stats (read/write): 00:16:21.164 nvme0n1: ios=2608/2998, merge=0/0, ticks=11580/13260, in_queue=24840, util=87.24% 00:16:21.164 nvme0n2: ios=3075/3079, merge=0/0, ticks=17713/15681, in_queue=33394, util=87.86% 00:16:21.164 nvme0n3: ios=2560/2663, merge=0/0, ticks=13707/10937, in_queue=24644, util=89.20% 00:16:21.164 nvme0n4: ios=2560/2908, merge=0/0, ticks=11499/12967, in_queue=24466, util=89.56% 00:16:21.164 10:08:51 -- target/fio.sh@55 -- # sync 00:16:21.164 10:08:51 -- target/fio.sh@59 -- # fio_pid=86915 00:16:21.164 10:08:51 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:16:21.164 10:08:51 -- target/fio.sh@61 -- # sleep 3 00:16:21.164 [global] 00:16:21.164 thread=1 00:16:21.164 invalidate=1 00:16:21.164 rw=read 00:16:21.164 time_based=1 00:16:21.164 runtime=10 00:16:21.164 ioengine=libaio 00:16:21.164 direct=1 00:16:21.164 bs=4096 00:16:21.164 iodepth=1 00:16:21.164 norandommap=1 00:16:21.164 numjobs=1 00:16:21.164 00:16:21.164 [job0] 00:16:21.164 filename=/dev/nvme0n1 00:16:21.164 [job1] 00:16:21.164 filename=/dev/nvme0n2 00:16:21.164 [job2] 00:16:21.164 filename=/dev/nvme0n3 00:16:21.164 [job3] 00:16:21.164 filename=/dev/nvme0n4 00:16:21.164 Could not set queue depth (nvme0n1) 00:16:21.164 Could not set queue depth (nvme0n2) 00:16:21.164 Could not set queue depth (nvme0n3) 00:16:21.164 Could not set queue depth (nvme0n4) 00:16:21.164 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.164 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.164 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.164 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:16:21.164 fio-3.35 00:16:21.164 Starting 4 threads 00:16:24.451 10:08:54 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:16:24.451 fio: pid=86962, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.451 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=40210432, buflen=4096 00:16:24.451 10:08:54 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:16:24.451 fio: pid=86961, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.451 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=72404992, buflen=4096 00:16:24.451 10:08:54 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.451 10:08:54 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:16:24.710 fio: pid=86959, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.710 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7843840, buflen=4096 00:16:24.710 10:08:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.710 10:08:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_malloc_delete Malloc1 00:16:24.968 fio: pid=86960, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:16:24.968 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=53530624, buflen=4096 00:16:24.968 00:16:24.968 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86959: Sun Jul 14 10:08:55 2024 00:16:24.968 read: IOPS=5470, BW=21.4MiB/s (22.4MB/s)(71.5MiB/3345msec) 00:16:24.968 slat (usec): min=8, max=13801, avg=18.65, stdev=178.71 00:16:24.968 clat (usec): min=118, max=3991, avg=162.38, stdev=53.32 00:16:24.968 lat (usec): min=131, max=17793, avg=181.03, stdev=201.70 00:16:24.968 clat percentiles (usec): 00:16:24.968 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:16:24.968 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 157], 00:16:24.968 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 184], 95.00th=[ 221], 00:16:24.968 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 449], 99.95th=[ 734], 00:16:24.968 | 99.99th=[ 3326] 00:16:24.969 bw ( KiB/s): min=20808, max=23112, per=34.54%, avg=22653.33, stdev=907.35, samples=6 00:16:24.969 iops : min= 5202, max= 5778, avg=5663.33, stdev=226.84, samples=6 00:16:24.969 lat (usec) : 250=98.49%, 500=1.42%, 750=0.04%, 1000=0.01% 00:16:24.969 lat (msec) : 2=0.02%, 4=0.02% 00:16:24.969 cpu : usr=1.50%, sys=7.36%, ctx=18317, majf=0, minf=1 00:16:24.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 issued rwts: total=18300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.969 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86960: Sun Jul 14 10:08:55 2024 00:16:24.969 read: IOPS=3640, BW=14.2MiB/s (14.9MB/s)(51.1MiB/3590msec) 00:16:24.969 slat (usec): min=10, max=10583, avg=21.87, stdev=175.60 00:16:24.969 clat (usec): min=3, max=3491, avg=250.83, stdev=71.92 00:16:24.969 lat (usec): min=129, max=10830, avg=272.71, stdev=193.38 00:16:24.969 clat percentiles (usec): 00:16:24.969 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 145], 20.00th=[ 231], 00:16:24.969 | 30.00th=[ 249], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:16:24.969 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 293], 95.00th=[ 302], 00:16:24.969 | 99.00th=[ 318], 99.50th=[ 367], 99.90th=[ 734], 99.95th=[ 1319], 00:16:24.969 | 99.99th=[ 3392] 00:16:24.969 bw ( KiB/s): min=13472, max=14304, per=20.97%, avg=13753.33, stdev=283.77, samples=6 00:16:24.969 iops : min= 3368, max= 3576, avg=3438.33, stdev=70.94, samples=6 00:16:24.969 lat (usec) : 4=0.01%, 100=0.01%, 250=30.17%, 500=69.62%, 750=0.10% 00:16:24.969 lat (usec) : 1000=0.02% 00:16:24.969 lat (msec) : 2=0.06%, 4=0.02% 00:16:24.969 cpu : usr=1.28%, sys=5.38%, ctx=13103, majf=0, minf=1 00:16:24.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 issued rwts: total=13070,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.969 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86961: Sun Jul 14 
10:08:55 2024 00:16:24.969 read: IOPS=5651, BW=22.1MiB/s (23.1MB/s)(69.1MiB/3128msec) 00:16:24.969 slat (usec): min=12, max=15166, avg=17.23, stdev=128.44 00:16:24.969 clat (usec): min=115, max=2423, avg=158.05, stdev=24.62 00:16:24.969 lat (usec): min=137, max=15330, avg=175.28, stdev=130.85 00:16:24.969 clat percentiles (usec): 00:16:24.969 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 147], 00:16:24.969 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 155], 60.00th=[ 159], 00:16:24.969 | 70.00th=[ 163], 80.00th=[ 167], 90.00th=[ 176], 95.00th=[ 182], 00:16:24.969 | 99.00th=[ 194], 99.50th=[ 198], 99.90th=[ 269], 99.95th=[ 408], 00:16:24.969 | 99.99th=[ 1369] 00:16:24.969 bw ( KiB/s): min=21768, max=22944, per=34.52%, avg=22638.67, stdev=452.25, samples=6 00:16:24.969 iops : min= 5442, max= 5736, avg=5659.67, stdev=113.06, samples=6 00:16:24.969 lat (usec) : 250=99.86%, 500=0.09%, 750=0.02%, 1000=0.01% 00:16:24.969 lat (msec) : 2=0.01%, 4=0.01% 00:16:24.969 cpu : usr=1.60%, sys=7.42%, ctx=17683, majf=0, minf=1 00:16:24.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 issued rwts: total=17678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.969 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=86962: Sun Jul 14 10:08:55 2024 00:16:24.969 read: IOPS=3396, BW=13.3MiB/s (13.9MB/s)(38.3MiB/2891msec) 00:16:24.969 slat (nsec): min=18465, max=78570, avg=21581.21, stdev=4209.22 00:16:24.969 clat (usec): min=141, max=2849, avg=270.82, stdev=47.17 00:16:24.969 lat (usec): min=161, max=2883, avg=292.40, stdev=47.45 00:16:24.969 clat percentiles (usec): 00:16:24.969 | 1.00th=[ 237], 5.00th=[ 245], 10.00th=[ 249], 20.00th=[ 255], 00:16:24.969 | 30.00th=[ 262], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:16:24.969 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 289], 95.00th=[ 297], 00:16:24.969 | 99.00th=[ 318], 99.50th=[ 383], 99.90th=[ 627], 99.95th=[ 824], 00:16:24.969 | 99.99th=[ 2835] 00:16:24.969 bw ( KiB/s): min=13528, max=13680, per=20.72%, avg=13588.80, stdev=67.08, samples=5 00:16:24.969 iops : min= 3382, max= 3420, avg=3397.20, stdev=16.77, samples=5 00:16:24.969 lat (usec) : 250=11.41%, 500=88.43%, 750=0.09%, 1000=0.03% 00:16:24.969 lat (msec) : 4=0.03% 00:16:24.969 cpu : usr=1.21%, sys=5.92%, ctx=9819, majf=0, minf=1 00:16:24.969 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:24.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:24.969 issued rwts: total=9818,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:24.969 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:24.969 00:16:24.969 Run status group 0 (all jobs): 00:16:24.969 READ: bw=64.0MiB/s (67.2MB/s), 13.3MiB/s-22.1MiB/s (13.9MB/s-23.1MB/s), io=230MiB (241MB), run=2891-3590msec 00:16:24.969 00:16:24.969 Disk stats (read/write): 00:16:24.969 nvme0n1: ios=17432/0, merge=0/0, ticks=2877/0, in_queue=2877, util=95.23% 00:16:24.969 nvme0n2: ios=11674/0, merge=0/0, ticks=3157/0, in_queue=3157, util=95.27% 00:16:24.969 nvme0n3: ios=17646/0, merge=0/0, ticks=2837/0, in_queue=2837, util=96.15% 00:16:24.969 nvme0n4: ios=9749/0, merge=0/0, ticks=2710/0, 
in_queue=2710, util=96.60% 00:16:24.969 10:08:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:24.969 10:08:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:16:25.226 10:08:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.226 10:08:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:16:25.483 10:08:55 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.483 10:08:55 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:16:25.483 10:08:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:25.483 10:08:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:16:26.047 10:08:56 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:16:26.048 10:08:56 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:16:26.048 10:08:56 -- target/fio.sh@69 -- # fio_status=0 00:16:26.048 10:08:56 -- target/fio.sh@70 -- # wait 86915 00:16:26.048 10:08:56 -- target/fio.sh@70 -- # fio_status=4 00:16:26.048 10:08:56 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:26.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:26.048 10:08:56 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:26.048 10:08:56 -- common/autotest_common.sh@1198 -- # local i=0 00:16:26.048 10:08:56 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.048 10:08:56 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:16:26.048 10:08:56 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:16:26.048 10:08:56 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:26.048 nvmf hotplug test: fio failed as expected 00:16:26.048 10:08:56 -- common/autotest_common.sh@1210 -- # return 0 00:16:26.048 10:08:56 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:16:26.048 10:08:56 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:16:26.048 10:08:56 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:26.305 10:08:56 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:16:26.305 10:08:56 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:16:26.305 10:08:56 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:16:26.305 10:08:56 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:16:26.305 10:08:56 -- target/fio.sh@91 -- # nvmftestfini 00:16:26.305 10:08:56 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:26.305 10:08:56 -- nvmf/common.sh@116 -- # sync 00:16:26.575 10:08:56 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:26.575 10:08:56 -- nvmf/common.sh@119 -- # set +e 00:16:26.576 10:08:56 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:26.576 10:08:56 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:26.576 rmmod nvme_tcp 00:16:26.576 rmmod nvme_fabrics 00:16:26.576 rmmod nvme_keyring 00:16:26.576 10:08:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:26.576 10:08:56 -- nvmf/common.sh@123 -- # set -e 00:16:26.576 10:08:56 -- nvmf/common.sh@124 -- # return 0 00:16:26.576 
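The hotplug exercise that just completed hinges on fio surfacing err=121 (Remote I/O error) once its backing devices vanish: a long-running read job is started in the background, then the raid0, concat0, and Malloc bdevs are deleted underneath it over RPC, and a non-zero fio status is the pass condition. A minimal sketch of the same pattern against one namespace follows; the job parameters mirror the job file above, but the single-bdev reduction and device name are illustrative, not the harness's exact flow.

# Hypothetical reduction of the target/fio.sh hotplug flow; the rpc.py path
# is this checkout's, the device and bdev names are illustrative only.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Background a 10s time-based read job against the exported namespace.
fio --name=hotplug --filename=/dev/nvme0n1 --rw=read --bs=4096 --iodepth=1 \
    --ioengine=libaio --direct=1 --time_based=1 --runtime=10 &
fio_pid=$!
sleep 3

# Pull a backing bdev out from under the running job.
$RPC bdev_malloc_delete Malloc0

# fio should now fail with Remote I/O error (err=121); that is the pass case.
wait $fio_pid && echo 'unexpected: fio survived bdev removal' \
             || echo 'nvmf hotplug test: fio failed as expected'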
10:08:56 -- nvmf/common.sh@477 -- # '[' -n 86425 ']' 00:16:26.576 10:08:56 -- nvmf/common.sh@478 -- # killprocess 86425 00:16:26.576 10:08:56 -- common/autotest_common.sh@926 -- # '[' -z 86425 ']' 00:16:26.576 10:08:56 -- common/autotest_common.sh@930 -- # kill -0 86425 00:16:26.576 10:08:56 -- common/autotest_common.sh@931 -- # uname 00:16:26.576 10:08:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:26.576 10:08:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 86425 00:16:26.576 killing process with pid 86425 00:16:26.576 10:08:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:26.576 10:08:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:26.576 10:08:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 86425' 00:16:26.576 10:08:56 -- common/autotest_common.sh@945 -- # kill 86425 00:16:26.576 10:08:56 -- common/autotest_common.sh@950 -- # wait 86425 00:16:26.838 10:08:57 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:26.838 10:08:57 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:26.838 10:08:57 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:26.838 10:08:57 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:26.838 10:08:57 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:26.838 10:08:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.838 10:08:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.838 10:08:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.838 10:08:57 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:26.838 00:16:26.838 real 0m18.957s 00:16:26.838 user 1m11.786s 00:16:26.838 sys 0m9.478s 00:16:26.838 10:08:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:26.838 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:16:26.838 ************************************ 00:16:26.838 END TEST nvmf_fio_target 00:16:26.838 ************************************ 00:16:26.839 10:08:57 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:26.839 10:08:57 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:26.839 10:08:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:26.839 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:16:26.839 ************************************ 00:16:26.839 START TEST nvmf_bdevio 00:16:26.839 ************************************ 00:16:26.839 10:08:57 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:16:26.839 * Looking for test storage... 
00:16:26.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:26.839 10:08:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:26.839 10:08:57 -- nvmf/common.sh@7 -- # uname -s 00:16:26.839 10:08:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:26.839 10:08:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:26.839 10:08:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:26.839 10:08:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:26.839 10:08:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:26.839 10:08:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:26.839 10:08:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:26.839 10:08:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:26.839 10:08:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:26.839 10:08:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:26.839 10:08:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:26.839 10:08:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:26.839 10:08:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:26.839 10:08:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:26.839 10:08:57 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:26.839 10:08:57 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:26.839 10:08:57 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:26.839 10:08:57 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:26.839 10:08:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.839 10:08:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.839 10:08:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.839 10:08:57 -- 
paths/export.sh@5 -- # export PATH 00:16:26.839 10:08:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:26.839 10:08:57 -- nvmf/common.sh@46 -- # : 0 00:16:26.839 10:08:57 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:26.839 10:08:57 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:26.839 10:08:57 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:26.839 10:08:57 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:26.839 10:08:57 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:26.839 10:08:57 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:26.839 10:08:57 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:26.839 10:08:57 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:26.839 10:08:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:26.839 10:08:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:26.839 10:08:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:26.839 10:08:57 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:26.839 10:08:57 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:26.839 10:08:57 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:26.839 10:08:57 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:26.839 10:08:57 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:26.839 10:08:57 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:26.839 10:08:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:26.839 10:08:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:26.839 10:08:57 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:26.839 10:08:57 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:26.839 10:08:57 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:26.839 10:08:57 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:26.839 10:08:57 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:26.839 10:08:57 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:26.839 10:08:57 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:26.839 10:08:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:26.839 10:08:57 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:26.839 10:08:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:26.839 10:08:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:26.839 10:08:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:26.839 10:08:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:26.839 10:08:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:26.839 10:08:57 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:26.839 
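The link teardown and rebuild that follows is nvmf_veth_init: stale interfaces are removed, the target gets its own network namespace (nvmf_tgt_ns_spdk), and veth pairs slaved to the nvmf_br bridge connect the initiator at 10.0.0.1 to the target at 10.0.0.2 and 10.0.0.3, verified by the three pings at the end. A stripped-down sketch with a single target link — the second target interface and the error handling are omitted:

# One-link reduction of the topology built below; each command mirrors one
# the harness runs, minus the nvmf_tgt_if2/10.0.0.3 leg.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side peers so the two ends can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Admit NVMe/TCP traffic and bridge-forwarded frames.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2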
10:08:57 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:26.839 Cannot find device "nvmf_tgt_br" 00:16:26.839 10:08:57 -- nvmf/common.sh@154 -- # true 00:16:26.839 10:08:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:26.839 Cannot find device "nvmf_tgt_br2" 00:16:26.839 10:08:57 -- nvmf/common.sh@155 -- # true 00:16:26.839 10:08:57 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:26.839 10:08:57 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:27.098 Cannot find device "nvmf_tgt_br" 00:16:27.098 10:08:57 -- nvmf/common.sh@157 -- # true 00:16:27.098 10:08:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:27.098 Cannot find device "nvmf_tgt_br2" 00:16:27.098 10:08:57 -- nvmf/common.sh@158 -- # true 00:16:27.098 10:08:57 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:27.098 10:08:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:27.098 10:08:57 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:27.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.098 10:08:57 -- nvmf/common.sh@161 -- # true 00:16:27.098 10:08:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:27.098 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:27.098 10:08:57 -- nvmf/common.sh@162 -- # true 00:16:27.098 10:08:57 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:27.098 10:08:57 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:27.098 10:08:57 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:27.098 10:08:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:27.098 10:08:57 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:27.098 10:08:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:27.098 10:08:57 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:27.098 10:08:57 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:27.098 10:08:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:27.098 10:08:57 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:27.098 10:08:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:27.098 10:08:57 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:27.098 10:08:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:27.098 10:08:57 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:27.098 10:08:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:27.098 10:08:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:27.098 10:08:57 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:27.098 10:08:57 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:27.098 10:08:57 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:27.098 10:08:57 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:27.098 10:08:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:27.098 10:08:57 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:27.098 10:08:57 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:27.098 10:08:57 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:27.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:27.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:16:27.098 00:16:27.098 --- 10.0.0.2 ping statistics --- 00:16:27.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.098 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:16:27.356 10:08:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:27.356 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:27.356 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:16:27.356 00:16:27.356 --- 10.0.0.3 ping statistics --- 00:16:27.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.356 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:16:27.356 10:08:57 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:27.356 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:27.356 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:16:27.356 00:16:27.356 --- 10.0.0.1 ping statistics --- 00:16:27.356 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:27.356 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:16:27.356 10:08:57 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:27.356 10:08:57 -- nvmf/common.sh@421 -- # return 0 00:16:27.356 10:08:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:27.356 10:08:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:27.356 10:08:57 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:27.356 10:08:57 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:27.356 10:08:57 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:27.356 10:08:57 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:27.356 10:08:57 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:27.356 10:08:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:27.356 10:08:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:27.356 10:08:57 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:27.356 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:16:27.356 10:08:57 -- nvmf/common.sh@469 -- # nvmfpid=87287 00:16:27.356 10:08:57 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:16:27.357 10:08:57 -- nvmf/common.sh@470 -- # waitforlisten 87287 00:16:27.357 10:08:57 -- common/autotest_common.sh@819 -- # '[' -z 87287 ']' 00:16:27.357 10:08:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:27.357 10:08:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:27.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:27.357 10:08:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:27.357 10:08:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:27.357 10:08:57 -- common/autotest_common.sh@10 -- # set +x 00:16:27.357 [2024-07-14 10:08:57.732428] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:27.357 [2024-07-14 10:08:57.732521] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:27.357 [2024-07-14 10:08:57.871770] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:27.614 [2024-07-14 10:08:57.929396] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:27.614 [2024-07-14 10:08:57.929559] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:27.614 [2024-07-14 10:08:57.929571] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:27.614 [2024-07-14 10:08:57.929579] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:27.614 [2024-07-14 10:08:57.929734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:27.614 [2024-07-14 10:08:57.930793] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:27.614 [2024-07-14 10:08:57.930900] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:27.614 [2024-07-14 10:08:57.930903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:28.179 10:08:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:28.179 10:08:58 -- common/autotest_common.sh@852 -- # return 0 00:16:28.179 10:08:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:28.179 10:08:58 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:28.179 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.437 10:08:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:28.437 10:08:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:28.437 10:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.437 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.437 [2024-07-14 10:08:58.764118] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:28.437 10:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.437 10:08:58 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:28.437 10:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.437 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.437 Malloc0 00:16:28.437 10:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.437 10:08:58 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:28.437 10:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.437 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.437 10:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.437 10:08:58 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:28.437 10:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.437 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.437 10:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.438 10:08:58 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:28.438 10:08:58 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:28.438 10:08:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.438 
[2024-07-14 10:08:58.822515] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:28.438 10:08:58 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:28.438 10:08:58 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:16:28.438 10:08:58 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:28.438 10:08:58 -- nvmf/common.sh@520 -- # config=() 00:16:28.438 10:08:58 -- nvmf/common.sh@520 -- # local subsystem config 00:16:28.438 10:08:58 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:28.438 10:08:58 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:28.438 { 00:16:28.438 "params": { 00:16:28.438 "name": "Nvme$subsystem", 00:16:28.438 "trtype": "$TEST_TRANSPORT", 00:16:28.438 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:28.438 "adrfam": "ipv4", 00:16:28.438 "trsvcid": "$NVMF_PORT", 00:16:28.438 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:28.438 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:28.438 "hdgst": ${hdgst:-false}, 00:16:28.438 "ddgst": ${ddgst:-false} 00:16:28.438 }, 00:16:28.438 "method": "bdev_nvme_attach_controller" 00:16:28.438 } 00:16:28.438 EOF 00:16:28.438 )") 00:16:28.438 10:08:58 -- nvmf/common.sh@542 -- # cat 00:16:28.438 10:08:58 -- nvmf/common.sh@544 -- # jq . 00:16:28.438 10:08:58 -- nvmf/common.sh@545 -- # IFS=, 00:16:28.438 10:08:58 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:28.438 "params": { 00:16:28.438 "name": "Nvme1", 00:16:28.438 "trtype": "tcp", 00:16:28.438 "traddr": "10.0.0.2", 00:16:28.438 "adrfam": "ipv4", 00:16:28.438 "trsvcid": "4420", 00:16:28.438 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:28.438 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:28.438 "hdgst": false, 00:16:28.438 "ddgst": false 00:16:28.438 }, 00:16:28.438 "method": "bdev_nvme_attach_controller" 00:16:28.438 }' 00:16:28.438 [2024-07-14 10:08:58.880705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:28.438 [2024-07-14 10:08:58.880777] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87341 ] 00:16:28.696 [2024-07-14 10:08:59.023662] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:28.696 [2024-07-14 10:08:59.082434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.696 [2024-07-14 10:08:59.082583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.696 [2024-07-14 10:08:59.082586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.954 [2024-07-14 10:08:59.252828] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:16:28.954 [2024-07-14 10:08:59.252870] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:28.954 I/O targets: 00:16:28.954 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:28.954 00:16:28.954 00:16:28.954 CUnit - A unit testing framework for C - Version 2.1-3 00:16:28.954 http://cunit.sourceforge.net/ 00:16:28.954 00:16:28.954 00:16:28.954 Suite: bdevio tests on: Nvme1n1 00:16:28.954 Test: blockdev write read block ...passed 00:16:28.954 Test: blockdev write zeroes read block ...passed 00:16:28.954 Test: blockdev write zeroes read no split ...passed 00:16:28.954 Test: blockdev write zeroes read split ...passed 00:16:28.954 Test: blockdev write zeroes read split partial ...passed 00:16:28.954 Test: blockdev reset ...[2024-07-14 10:08:59.368333] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:28.954 [2024-07-14 10:08:59.368431] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18724e0 (9): Bad file descriptor 00:16:28.954 [2024-07-14 10:08:59.382990] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:28.954 passed 00:16:28.954 Test: blockdev write read 8 blocks ...passed 00:16:28.954 Test: blockdev write read size > 128k ...passed 00:16:28.954 Test: blockdev write read invalid size ...passed 00:16:28.954 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.954 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.954 Test: blockdev write read max offset ...passed 00:16:29.212 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:29.212 Test: blockdev writev readv 8 blocks ...passed 00:16:29.212 Test: blockdev writev readv 30 x 1block ...passed 00:16:29.212 Test: blockdev writev readv block ...passed 00:16:29.212 Test: blockdev writev readv size > 128k ...passed 00:16:29.212 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:29.212 Test: blockdev comparev and writev ...[2024-07-14 10:08:59.552521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.552562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.552614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.552625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.552973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.553017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.553420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.553463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.553844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:29.212 [2024-07-14 10:08:59.553884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:29.212 [2024-07-14 10:08:59.553894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:29.213 passed 00:16:29.213 Test: blockdev nvme passthru rw ...passed 00:16:29.213 Test: blockdev nvme passthru vendor specific ...[2024-07-14 10:08:59.635528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.213 [2024-07-14 10:08:59.635562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:29.213 [2024-07-14 10:08:59.635684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.213 [2024-07-14 10:08:59.635700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:29.213 [2024-07-14 10:08:59.635806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.213 [2024-07-14 10:08:59.635831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:29.213 [2024-07-14 10:08:59.635934] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:29.213 [2024-07-14 10:08:59.635958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:29.213 passed 00:16:29.213 Test: blockdev nvme admin passthru ...passed 00:16:29.213 Test: blockdev copy ...passed 00:16:29.213 00:16:29.213 Run Summary: Type Total Ran Passed Failed Inactive 00:16:29.213 suites 1 1 n/a 0 0 00:16:29.213 tests 23 23 23 0 0 00:16:29.213 asserts 152 152 152 0 n/a 00:16:29.213 00:16:29.213 Elapsed time = 0.873 seconds 00:16:29.470 10:08:59 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:29.470 10:08:59 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:29.470 10:08:59 -- common/autotest_common.sh@10 -- # set +x 00:16:29.470 10:08:59 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:29.470 10:08:59 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:29.470 10:08:59 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:29.470 10:08:59 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:29.470 10:08:59 -- nvmf/common.sh@116 -- # sync 00:16:29.470 10:08:59 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:29.470 10:08:59 -- nvmf/common.sh@119 -- # set +e 00:16:29.470 10:08:59 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:29.470 10:08:59 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:29.470 rmmod nvme_tcp 00:16:29.471 rmmod nvme_fabrics 00:16:29.471 rmmod nvme_keyring 00:16:29.471 10:08:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:29.471 10:08:59 -- nvmf/common.sh@123 -- # set -e 00:16:29.471 10:08:59 -- nvmf/common.sh@124 -- # return 0 00:16:29.471 10:08:59 -- nvmf/common.sh@477 -- # '[' -n 87287 ']' 00:16:29.471 10:08:59 -- nvmf/common.sh@478 -- # killprocess 87287 00:16:29.471 10:08:59 -- common/autotest_common.sh@926 -- # '[' -z 87287 ']' 00:16:29.471 10:08:59 -- common/autotest_common.sh@930 -- # kill -0 87287 00:16:29.471 10:08:59 -- common/autotest_common.sh@931 -- # uname 00:16:29.471 10:08:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:29.471 10:08:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87287 00:16:29.471 killing process with pid 87287 00:16:29.471 10:09:00 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:29.471 10:09:00 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:29.471 10:09:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87287' 00:16:29.471 10:09:00 -- common/autotest_common.sh@945 -- # kill 87287 00:16:29.471 10:09:00 -- common/autotest_common.sh@950 -- # wait 87287 00:16:29.729 10:09:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:29.729 10:09:00 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:29.729 10:09:00 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:29.729 10:09:00 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:29.729 10:09:00 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:29.729 10:09:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.729 10:09:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.729 10:09:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.729 10:09:00 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:29.729 00:16:29.729 real 0m3.032s 00:16:29.729 user 0m11.123s 00:16:29.729 sys 0m0.778s 00:16:29.729 10:09:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:29.729 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:29.729 ************************************ 00:16:29.729 END TEST nvmf_bdevio 00:16:29.729 ************************************ 00:16:29.988 10:09:00 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:16:29.988 10:09:00 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:29.988 10:09:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:16:29.988 10:09:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:29.988 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:29.988 ************************************ 00:16:29.988 START TEST nvmf_bdevio_no_huge 00:16:29.988 ************************************ 00:16:29.988 10:09:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:29.988 * Looking for test storage... 
00:16:29.988 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:29.988 10:09:00 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:29.988 10:09:00 -- nvmf/common.sh@7 -- # uname -s 00:16:29.988 10:09:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:29.988 10:09:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:29.988 10:09:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:29.988 10:09:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:29.988 10:09:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:29.988 10:09:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:29.988 10:09:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:29.988 10:09:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:29.988 10:09:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:29.988 10:09:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:29.988 10:09:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:29.988 10:09:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:29.988 10:09:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:29.988 10:09:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:29.988 10:09:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:29.988 10:09:00 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:29.988 10:09:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:29.988 10:09:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:29.988 10:09:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:29.988 10:09:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.988 10:09:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.988 10:09:00 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.988 10:09:00 -- 
paths/export.sh@5 -- # export PATH 00:16:29.988 10:09:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:29.988 10:09:00 -- nvmf/common.sh@46 -- # : 0 00:16:29.988 10:09:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:29.988 10:09:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:29.988 10:09:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:29.988 10:09:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:29.988 10:09:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:29.988 10:09:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:29.988 10:09:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:29.988 10:09:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:29.988 10:09:00 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:29.988 10:09:00 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:29.988 10:09:00 -- target/bdevio.sh@14 -- # nvmftestinit 00:16:29.988 10:09:00 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:29.988 10:09:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:29.988 10:09:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:29.988 10:09:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:29.988 10:09:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:29.988 10:09:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.988 10:09:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:29.988 10:09:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.988 10:09:00 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:29.988 10:09:00 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:29.989 10:09:00 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:29.989 10:09:00 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:29.989 10:09:00 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:29.989 10:09:00 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:29.989 10:09:00 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.989 10:09:00 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.989 10:09:00 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:29.989 10:09:00 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:29.989 10:09:00 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:29.989 10:09:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:29.989 10:09:00 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:29.989 10:09:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.989 10:09:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:29.989 10:09:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:29.989 10:09:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:29.989 10:09:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:29.989 10:09:00 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:29.989 
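The no-huge variant rebuilds the same veth topology below; the real difference comes when the target starts: nvmf_tgt is launched with --no-huge -s 1024, so DPDK allocates from anonymous memory (note --no-huge --iova-mode=va in the EAL parameters further down, versus --iova-mode=pa in the hugepage-backed runs) capped at 1024 MB. The launch reduced to its essentials, flags annotated as a sketch — the values are the ones this run uses:

# Standalone equivalent of the nvmfappstart invocation below.
# -i 0       shared-memory instance id
# -e 0xFFFF  enable all tracepoint groups
# --no-huge  back DPDK memory with anonymous pages instead of hugepages
# -s 1024    cap the memory pool at 1024 MB
# -m 0x78    run reactors on cores 3-6
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78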
10:09:00 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:29.989 Cannot find device "nvmf_tgt_br" 00:16:29.989 10:09:00 -- nvmf/common.sh@154 -- # true 00:16:29.989 10:09:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.989 Cannot find device "nvmf_tgt_br2" 00:16:29.989 10:09:00 -- nvmf/common.sh@155 -- # true 00:16:29.989 10:09:00 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:29.989 10:09:00 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:29.989 Cannot find device "nvmf_tgt_br" 00:16:29.989 10:09:00 -- nvmf/common.sh@157 -- # true 00:16:29.989 10:09:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:29.989 Cannot find device "nvmf_tgt_br2" 00:16:29.989 10:09:00 -- nvmf/common.sh@158 -- # true 00:16:29.989 10:09:00 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:29.989 10:09:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:30.247 10:09:00 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.247 10:09:00 -- nvmf/common.sh@161 -- # true 00:16:30.247 10:09:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.247 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.247 10:09:00 -- nvmf/common.sh@162 -- # true 00:16:30.247 10:09:00 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.247 10:09:00 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.247 10:09:00 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.247 10:09:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.247 10:09:00 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.247 10:09:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.247 10:09:00 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.247 10:09:00 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.247 10:09:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.247 10:09:00 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:30.247 10:09:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:30.247 10:09:00 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:30.247 10:09:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:30.247 10:09:00 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.247 10:09:00 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.247 10:09:00 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.247 10:09:00 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:30.247 10:09:00 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:30.247 10:09:00 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.247 10:09:00 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.247 10:09:00 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.247 10:09:00 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.247 10:09:00 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.247 10:09:00 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:30.247 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:30.247 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:16:30.247 00:16:30.247 --- 10.0.0.2 ping statistics --- 00:16:30.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.247 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:16:30.247 10:09:00 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:30.247 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.247 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:16:30.247 00:16:30.247 --- 10.0.0.3 ping statistics --- 00:16:30.247 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.248 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:16:30.248 10:09:00 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:16:30.248 00:16:30.248 --- 10.0.0.1 ping statistics --- 00:16:30.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.248 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:16:30.248 10:09:00 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.248 10:09:00 -- nvmf/common.sh@421 -- # return 0 00:16:30.248 10:09:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:30.248 10:09:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.248 10:09:00 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:30.248 10:09:00 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:30.248 10:09:00 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.248 10:09:00 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:30.248 10:09:00 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:30.248 10:09:00 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:30.248 10:09:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:30.248 10:09:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:30.248 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.248 10:09:00 -- nvmf/common.sh@469 -- # nvmfpid=87514 00:16:30.248 10:09:00 -- nvmf/common.sh@470 -- # waitforlisten 87514 00:16:30.248 10:09:00 -- common/autotest_common.sh@819 -- # '[' -z 87514 ']' 00:16:30.248 10:09:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.248 10:09:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:30.248 10:09:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:30.248 10:09:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.248 10:09:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:30.248 10:09:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.506 [2024-07-14 10:09:00.818571] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:16:30.506 [2024-07-14 10:09:00.818693] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:30.506 [2024-07-14 10:09:00.950959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.506 [2024-07-14 10:09:01.035817] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:30.506 [2024-07-14 10:09:01.035969] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:30.506 [2024-07-14 10:09:01.035980] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.506 [2024-07-14 10:09:01.035988] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.506 [2024-07-14 10:09:01.036120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:16:30.506 [2024-07-14 10:09:01.036261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:16:30.506 [2024-07-14 10:09:01.036549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:16:30.506 [2024-07-14 10:09:01.036735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:31.439 10:09:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:31.439 10:09:01 -- common/autotest_common.sh@852 -- # return 0 00:16:31.439 10:09:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:31.439 10:09:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 10:09:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.439 10:09:01 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:31.439 10:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 [2024-07-14 10:09:01.847101] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:31.439 10:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.439 10:09:01 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:31.439 10:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 Malloc0 00:16:31.439 10:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.439 10:09:01 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:31.439 10:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 10:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.439 10:09:01 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:31.439 10:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 10:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.439 10:09:01 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:31.439 10:09:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:31.439 10:09:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.439 
[2024-07-14 10:09:01.885653] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:31.439 10:09:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:31.439 10:09:01 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:31.439 10:09:01 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:31.439 10:09:01 -- nvmf/common.sh@520 -- # config=() 00:16:31.439 10:09:01 -- nvmf/common.sh@520 -- # local subsystem config 00:16:31.439 10:09:01 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:16:31.439 10:09:01 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:16:31.439 { 00:16:31.439 "params": { 00:16:31.439 "name": "Nvme$subsystem", 00:16:31.439 "trtype": "$TEST_TRANSPORT", 00:16:31.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:31.439 "adrfam": "ipv4", 00:16:31.439 "trsvcid": "$NVMF_PORT", 00:16:31.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:31.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:31.439 "hdgst": ${hdgst:-false}, 00:16:31.439 "ddgst": ${ddgst:-false} 00:16:31.439 }, 00:16:31.439 "method": "bdev_nvme_attach_controller" 00:16:31.439 } 00:16:31.439 EOF 00:16:31.439 )") 00:16:31.439 10:09:01 -- nvmf/common.sh@542 -- # cat 00:16:31.439 10:09:01 -- nvmf/common.sh@544 -- # jq . 00:16:31.439 10:09:01 -- nvmf/common.sh@545 -- # IFS=, 00:16:31.439 10:09:01 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:16:31.439 "params": { 00:16:31.439 "name": "Nvme1", 00:16:31.439 "trtype": "tcp", 00:16:31.439 "traddr": "10.0.0.2", 00:16:31.439 "adrfam": "ipv4", 00:16:31.439 "trsvcid": "4420", 00:16:31.439 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:31.439 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:31.439 "hdgst": false, 00:16:31.439 "ddgst": false 00:16:31.439 }, 00:16:31.439 "method": "bdev_nvme_attach_controller" 00:16:31.439 }' 00:16:31.439 [2024-07-14 10:09:01.942852] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:31.439 [2024-07-14 10:09:01.942943] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid87573 ] 00:16:31.697 [2024-07-14 10:09:02.084820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:31.697 [2024-07-14 10:09:02.199123] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.697 [2024-07-14 10:09:02.199244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:31.697 [2024-07-14 10:09:02.199247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.953 [2024-07-14 10:09:02.369372] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
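Two things worth unpacking at this point. First, bdevio reads its bdev configuration as JSON from /dev/fd/62, which is just the read end of a process substitution over gen_nvmf_target_json: the heredoc traced above (nvmf/common.sh@542) fills in one bdev_nvme_attach_controller stanza per subsystem and the jq step assembles the document that was printf'd. A minimal sketch of the invocation, paths taken from the trace:

    # <(...) is why the binary sees a /dev/fd/NN path for its config
    /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio \
        --json <(gen_nvmf_target_json) --no-huge -s 1024

Second, the spdk.sock errors just below are expected in this test: bdevio tries the default RPC path, which the still-running nvmf target already owns. They appear benign here, since the CUnit suite that follows runs to completion; the bdevperf runs later in the log sidestep the clash with -r /var/tmp/bdevperf.sock.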
00:16:31.953 [2024-07-14 10:09:02.369433] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:16:31.953 I/O targets: 00:16:31.953 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:31.953 00:16:31.953 00:16:31.953 CUnit - A unit testing framework for C - Version 2.1-3 00:16:31.953 http://cunit.sourceforge.net/ 00:16:31.953 00:16:31.953 00:16:31.953 Suite: bdevio tests on: Nvme1n1 00:16:31.953 Test: blockdev write read block ...passed 00:16:31.953 Test: blockdev write zeroes read block ...passed 00:16:31.953 Test: blockdev write zeroes read no split ...passed 00:16:31.953 Test: blockdev write zeroes read split ...passed 00:16:31.954 Test: blockdev write zeroes read split partial ...passed 00:16:31.954 Test: blockdev reset ...[2024-07-14 10:09:02.493112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:31.954 [2024-07-14 10:09:02.493217] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x611160 (9): Bad file descriptor 00:16:32.211 [2024-07-14 10:09:02.507968] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:32.211 passed 00:16:32.211 Test: blockdev write read 8 blocks ...passed 00:16:32.211 Test: blockdev write read size > 128k ...passed 00:16:32.211 Test: blockdev write read invalid size ...passed 00:16:32.211 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:32.211 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:32.211 Test: blockdev write read max offset ...passed 00:16:32.211 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:32.211 Test: blockdev writev readv 8 blocks ...passed 00:16:32.211 Test: blockdev writev readv 30 x 1block ...passed 00:16:32.211 Test: blockdev writev readv block ...passed 00:16:32.211 Test: blockdev writev readv size > 128k ...passed 00:16:32.211 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:32.211 Test: blockdev comparev and writev ...[2024-07-14 10:09:02.678404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.211 [2024-07-14 10:09:02.678461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:32.211 [2024-07-14 10:09:02.678498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.211 [2024-07-14 10:09:02.678509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:32.211 [2024-07-14 10:09:02.678902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.211 [2024-07-14 10:09:02.678929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:32.211 [2024-07-14 10:09:02.678946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.211 [2024-07-14 10:09:02.678956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:32.212 [2024-07-14 10:09:02.679314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.212 [2024-07-14 10:09:02.679340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:32.212 [2024-07-14 10:09:02.679357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.212 [2024-07-14 10:09:02.679367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:32.212 [2024-07-14 10:09:02.679704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.212 [2024-07-14 10:09:02.679729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:32.212 [2024-07-14 10:09:02.679746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:32.212 [2024-07-14 10:09:02.679756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:32.212 passed 00:16:32.470 Test: blockdev nvme passthru rw ...passed 00:16:32.470 Test: blockdev nvme passthru vendor specific ...[2024-07-14 10:09:02.761577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:32.470 [2024-07-14 10:09:02.761611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:32.470 [2024-07-14 10:09:02.761731] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:32.470 [2024-07-14 10:09:02.761748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:32.470 [2024-07-14 10:09:02.761855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:32.470 [2024-07-14 10:09:02.761879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:32.470 [2024-07-14 10:09:02.761987] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:32.470 [2024-07-14 10:09:02.762011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:32.470 passed 00:16:32.470 Test: blockdev nvme admin passthru ...passed 00:16:32.470 Test: blockdev copy ...passed 00:16:32.470 00:16:32.470 Run Summary: Type Total Ran Passed Failed Inactive 00:16:32.470 suites 1 1 n/a 0 0 00:16:32.470 tests 23 23 23 0 0 00:16:32.470 asserts 152 152 152 0 n/a 00:16:32.470 00:16:32.470 Elapsed time = 0.893 seconds 00:16:32.728 10:09:03 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:32.728 10:09:03 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.728 10:09:03 -- common/autotest_common.sh@10 -- # set +x 00:16:32.728 10:09:03 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.728 10:09:03 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:32.728 10:09:03 -- target/bdevio.sh@30 -- # nvmftestfini 00:16:32.728 10:09:03 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:16:32.728 10:09:03 -- nvmf/common.sh@116 -- # sync 00:16:32.728 10:09:03 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:32.728 10:09:03 -- nvmf/common.sh@119 -- # set +e 00:16:32.728 10:09:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:32.728 10:09:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:32.728 rmmod nvme_tcp 00:16:32.728 rmmod nvme_fabrics 00:16:32.728 rmmod nvme_keyring 00:16:32.728 10:09:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:32.728 10:09:03 -- nvmf/common.sh@123 -- # set -e 00:16:32.728 10:09:03 -- nvmf/common.sh@124 -- # return 0 00:16:32.728 10:09:03 -- nvmf/common.sh@477 -- # '[' -n 87514 ']' 00:16:32.728 10:09:03 -- nvmf/common.sh@478 -- # killprocess 87514 00:16:32.728 10:09:03 -- common/autotest_common.sh@926 -- # '[' -z 87514 ']' 00:16:32.728 10:09:03 -- common/autotest_common.sh@930 -- # kill -0 87514 00:16:32.728 10:09:03 -- common/autotest_common.sh@931 -- # uname 00:16:32.728 10:09:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:32.728 10:09:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87514 00:16:32.728 10:09:03 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:16:32.728 10:09:03 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:16:32.728 10:09:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87514' 00:16:32.728 killing process with pid 87514 00:16:32.728 10:09:03 -- common/autotest_common.sh@945 -- # kill 87514 00:16:32.728 10:09:03 -- common/autotest_common.sh@950 -- # wait 87514 00:16:33.294 10:09:03 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:33.294 10:09:03 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:33.294 10:09:03 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:33.294 10:09:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.294 10:09:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.294 10:09:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.294 10:09:03 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:33.294 00:16:33.294 real 0m3.304s 00:16:33.294 user 0m12.016s 00:16:33.294 sys 0m1.189s 00:16:33.294 10:09:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.294 ************************************ 00:16:33.294 10:09:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.294 END TEST nvmf_bdevio_no_huge 00:16:33.294 ************************************ 00:16:33.294 10:09:03 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:33.294 10:09:03 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:33.294 10:09:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:33.294 10:09:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.294 ************************************ 00:16:33.294 START TEST nvmf_tls 00:16:33.294 ************************************ 00:16:33.294 10:09:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:33.294 * Looking for test storage... 
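Before the TLS suite output continues, a note on the teardown just traced: nvmftestfini syncs, unloads the nvme-tcp/nvme-fabrics modules (the rmmod lines above), and reaps the target with killprocess. From the autotest_common.sh@926-@950 traces, killprocess condenses to roughly this sketch (the real helper also special-cases sudo-wrapped apps rather than killing sudo itself, which is what the reactor_3 = sudo comparison above is checking):

    killprocess() {
        local pid=$1
        ps --no-headers -o comm= "$pid"     # @932: names the process (a reactor here)
        echo "killing process with pid $pid"
        kill "$pid"                         # @945: SIGTERM
        wait "$pid"                         # @950: reap and propagate exit status
    }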
00:16:33.294 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:33.294 10:09:03 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:33.294 10:09:03 -- nvmf/common.sh@7 -- # uname -s 00:16:33.294 10:09:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.294 10:09:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.294 10:09:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.294 10:09:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.294 10:09:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.294 10:09:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.294 10:09:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.294 10:09:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.294 10:09:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.294 10:09:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:33.294 10:09:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:16:33.294 10:09:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.294 10:09:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.294 10:09:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:33.294 10:09:03 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:33.294 10:09:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.294 10:09:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.294 10:09:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.294 10:09:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.294 10:09:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.294 10:09:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.294 10:09:03 -- paths/export.sh@5 
-- # export PATH 00:16:33.294 10:09:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.294 10:09:03 -- nvmf/common.sh@46 -- # : 0 00:16:33.294 10:09:03 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:33.294 10:09:03 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:33.294 10:09:03 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:33.294 10:09:03 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.294 10:09:03 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.294 10:09:03 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:33.294 10:09:03 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:33.294 10:09:03 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:33.294 10:09:03 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:33.294 10:09:03 -- target/tls.sh@71 -- # nvmftestinit 00:16:33.294 10:09:03 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:33.294 10:09:03 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.294 10:09:03 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:33.294 10:09:03 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:33.294 10:09:03 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:33.294 10:09:03 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.294 10:09:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.294 10:09:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.294 10:09:03 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:33.294 10:09:03 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:33.295 10:09:03 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:33.295 10:09:03 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:33.295 10:09:03 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:33.295 10:09:03 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:33.295 10:09:03 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:33.295 10:09:03 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:33.295 10:09:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:33.295 10:09:03 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:33.295 10:09:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:33.295 10:09:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:33.295 10:09:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:33.295 10:09:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:33.295 10:09:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:33.295 10:09:03 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:33.295 10:09:03 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:16:33.295 Cannot find device "nvmf_tgt_br" 00:16:33.295 10:09:03 -- nvmf/common.sh@154 -- # true 00:16:33.295 10:09:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:33.295 Cannot find device "nvmf_tgt_br2" 00:16:33.295 10:09:03 -- nvmf/common.sh@155 -- # true 00:16:33.295 10:09:03 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:33.295 10:09:03 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:33.295 Cannot find device "nvmf_tgt_br" 00:16:33.295 10:09:03 -- nvmf/common.sh@157 -- # true 00:16:33.295 10:09:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:33.295 Cannot find device "nvmf_tgt_br2" 00:16:33.295 10:09:03 -- nvmf/common.sh@158 -- # true 00:16:33.295 10:09:03 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:33.553 10:09:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:33.553 10:09:03 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:33.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.553 10:09:03 -- nvmf/common.sh@161 -- # true 00:16:33.553 10:09:03 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:33.553 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:33.553 10:09:03 -- nvmf/common.sh@162 -- # true 00:16:33.553 10:09:03 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:33.553 10:09:03 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:33.553 10:09:03 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:33.553 10:09:03 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:33.553 10:09:03 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:33.553 10:09:03 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:33.553 10:09:03 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:33.553 10:09:03 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.553 10:09:03 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:33.553 10:09:03 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:33.553 10:09:03 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:33.553 10:09:03 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:33.553 10:09:03 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:33.553 10:09:03 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:33.553 10:09:03 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:33.553 10:09:03 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:33.553 10:09:04 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:33.553 10:09:04 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:33.553 10:09:04 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:33.553 10:09:04 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:33.553 10:09:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:33.553 10:09:04 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:33.553 10:09:04 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:16:33.553 10:09:04 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:33.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:33.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:16:33.553 00:16:33.553 --- 10.0.0.2 ping statistics --- 00:16:33.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.553 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:16:33.553 10:09:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:33.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:33.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:16:33.553 00:16:33.553 --- 10.0.0.3 ping statistics --- 00:16:33.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.553 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:16:33.553 10:09:04 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:33.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:33.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:16:33.553 00:16:33.553 --- 10.0.0.1 ping statistics --- 00:16:33.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:33.553 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:16:33.553 10:09:04 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:33.553 10:09:04 -- nvmf/common.sh@421 -- # return 0 00:16:33.553 10:09:04 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:33.553 10:09:04 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:33.553 10:09:04 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:33.553 10:09:04 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:33.553 10:09:04 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:33.553 10:09:04 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:33.553 10:09:04 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:33.553 10:09:04 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:33.553 10:09:04 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:33.553 10:09:04 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:33.553 10:09:04 -- common/autotest_common.sh@10 -- # set +x 00:16:33.553 10:09:04 -- nvmf/common.sh@469 -- # nvmfpid=87751 00:16:33.553 10:09:04 -- nvmf/common.sh@470 -- # waitforlisten 87751 00:16:33.553 10:09:04 -- common/autotest_common.sh@819 -- # '[' -z 87751 ']' 00:16:33.553 10:09:04 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:33.553 10:09:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:33.553 10:09:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:33.553 10:09:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:33.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:33.553 10:09:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:33.553 10:09:04 -- common/autotest_common.sh@10 -- # set +x 00:16:33.811 [2024-07-14 10:09:04.133226] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
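The three pings above re-validate the veth/bridge topology that nvmf_veth_init rebuilt a few lines earlier (the "Cannot find device" and "Cannot open network namespace" messages are just the cleanup pass finding nothing left over from the bdevio run). Condensed from the nvmf/common.sh@165-@201 traces, with the link-up steps elided:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side, 10.0.0.1
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, 10.0.0.2
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target address, 10.0.0.3
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                              # ties the host-side peers together
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

(The EAL parameter echo on the next line is the -m 0x2 --wait-for-rpc target instance starting inside that namespace.)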
00:16:33.811 [2024-07-14 10:09:04.133305] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:33.811 [2024-07-14 10:09:04.272372] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:33.811 [2024-07-14 10:09:04.334624] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:33.811 [2024-07-14 10:09:04.334820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:33.811 [2024-07-14 10:09:04.334836] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:33.811 [2024-07-14 10:09:04.334847] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:33.811 [2024-07-14 10:09:04.334877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.745 10:09:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:34.745 10:09:05 -- common/autotest_common.sh@852 -- # return 0 00:16:34.745 10:09:05 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:34.745 10:09:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:34.745 10:09:05 -- common/autotest_common.sh@10 -- # set +x 00:16:34.745 10:09:05 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.745 10:09:05 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:16:34.745 10:09:05 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:35.003 true 00:16:35.003 10:09:05 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.003 10:09:05 -- target/tls.sh@82 -- # jq -r .tls_version 00:16:35.262 10:09:05 -- target/tls.sh@82 -- # version=0 00:16:35.262 10:09:05 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:16:35.262 10:09:05 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:35.520 10:09:05 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.520 10:09:05 -- target/tls.sh@90 -- # jq -r .tls_version 00:16:35.520 10:09:06 -- target/tls.sh@90 -- # version=13 00:16:35.520 10:09:06 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:16:35.520 10:09:06 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:35.779 10:09:06 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:35.779 10:09:06 -- target/tls.sh@98 -- # jq -r .tls_version 00:16:36.037 10:09:06 -- target/tls.sh@98 -- # version=7 00:16:36.037 10:09:06 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:16:36.037 10:09:06 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:36.037 10:09:06 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:36.295 10:09:06 -- target/tls.sh@105 -- # ktls=false 00:16:36.295 10:09:06 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:16:36.295 10:09:06 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:36.554 10:09:06 -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:36.554 10:09:06 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:36.554 10:09:07 -- target/tls.sh@113 -- # ktls=true 00:16:36.554 10:09:07 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:16:36.554 10:09:07 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:37.121 10:09:07 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:37.121 10:09:07 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:16:37.121 10:09:07 -- target/tls.sh@121 -- # ktls=false 00:16:37.121 10:09:07 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:16:37.121 10:09:07 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:16:37.121 10:09:07 -- target/tls.sh@49 -- # local key hash crc 00:16:37.121 10:09:07 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:16:37.121 10:09:07 -- target/tls.sh@51 -- # hash=01 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # gzip -1 -c 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # tail -c8 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # head -c 4 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # crc='p$H�' 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:37.121 10:09:07 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:37.121 10:09:07 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:16:37.121 10:09:07 -- target/tls.sh@49 -- # local key hash crc 00:16:37.121 10:09:07 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:16:37.121 10:09:07 -- target/tls.sh@51 -- # hash=01 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # gzip -1 -c 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # tail -c8 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # head -c 4 00:16:37.121 10:09:07 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:16:37.121 10:09:07 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:37.121 10:09:07 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:37.121 10:09:07 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:37.121 10:09:07 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:37.121 10:09:07 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:37.121 10:09:07 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:37.121 10:09:07 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:37.121 10:09:07 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:16:37.380 10:09:07 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:37.380 10:09:07 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:37.947 10:09:08 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:37.947 10:09:08 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:37.947 10:09:08 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:37.947 [2024-07-14 10:09:08.387572] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:37.947 10:09:08 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:38.205 10:09:08 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:16:38.481 [2024-07-14 10:09:08.915743] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:38.481 [2024-07-14 10:09:08.915968] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:38.481 10:09:08 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:38.769 malloc0 00:16:38.769 10:09:09 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:39.034 10:09:09 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:39.034 10:09:09 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.235 Initializing NVMe Controllers 00:16:51.235 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:16:51.235 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:51.235 Initialization complete. Launching workers. 
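A note on the NVMeTLSkey-1:01:... strings written to key1.txt and key2.txt above: format_interchange_psk builds the TLS PSK interchange form base64(key || CRC-32) with a trailing colon, and it harvests the CRC-32 from a gzip trailer, whose last eight bytes are the CRC-32 (little-endian) followed by the input length, hence the gzip -1 -c | tail -c8 | head -c 4 pipeline. The mangled crc='p$H�' in the trace is simply a raw unprintable CRC byte; decoding the base64 output shows the four bytes are 0x70 0x24 0x48 0x90. As a standalone sketch:

    key=00112233445566778899aabbccddeeff
    # last 8 bytes of a gzip stream = CRC-32 (LE) + input size; keep the CRC
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # hash tag 01 as in the trace; a CRC containing NUL bytes would need the
    # pipeline form the script uses instead of a shell variable
    psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"   # NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:

The perf run above then hands the same file to the initiator via --psk-path, matching the PSK registered on the target with nvmf_subsystem_add_host --psk; its results follow.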
00:16:51.235 ======================================================== 00:16:51.235 Latency(us) 00:16:51.235 Device Information : IOPS MiB/s Average min max 00:16:51.235 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11989.09 46.83 5339.08 1532.97 7477.08 00:16:51.235 ======================================================== 00:16:51.235 Total : 11989.09 46.83 5339.08 1532.97 7477.08 00:16:51.235 00:16:51.235 10:09:19 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.235 10:09:19 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:51.235 10:09:19 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:51.235 10:09:19 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:51.235 10:09:19 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:16:51.235 10:09:19 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:51.235 10:09:19 -- target/tls.sh@28 -- # bdevperf_pid=88117 00:16:51.235 10:09:19 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:51.235 10:09:19 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:51.235 10:09:19 -- target/tls.sh@31 -- # waitforlisten 88117 /var/tmp/bdevperf.sock 00:16:51.235 10:09:19 -- common/autotest_common.sh@819 -- # '[' -z 88117 ']' 00:16:51.235 10:09:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:51.235 10:09:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:51.235 10:09:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:51.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:51.235 10:09:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:51.235 10:09:19 -- common/autotest_common.sh@10 -- # set +x 00:16:51.235 [2024-07-14 10:09:19.785563] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:16:51.235 [2024-07-14 10:09:19.785657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88117 ] 00:16:51.235 [2024-07-14 10:09:19.927277] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:51.235 [2024-07-14 10:09:20.005565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:51.235 10:09:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:51.235 10:09:20 -- common/autotest_common.sh@852 -- # return 0 00:16:51.235 10:09:20 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:16:51.235 [2024-07-14 10:09:20.905375] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:51.235 TLSTESTn1 00:16:51.235 10:09:20 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:51.235 Running I/O for 10 seconds... 
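This is the positive TLS path: the bdevperf app attaches a controller over TLS with the same key file that was registered for host1, which creates bdev TLSTESTn1, and perform_tests then drives the verify workload over bdevperf's own RPC socket. The two steps, copied from the trace:

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

The ten-second run's numbers follow.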
00:17:01.212 00:17:01.212 Latency(us) 00:17:01.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.212 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:01.212 Verification LBA range: start 0x0 length 0x2000 00:17:01.212 TLSTESTn1 : 10.01 6675.09 26.07 0.00 0.00 19146.86 4081.11 18230.92 00:17:01.212 =================================================================================================================== 00:17:01.212 Total : 6675.09 26.07 0.00 0.00 19146.86 4081.11 18230.92 00:17:01.212 0 00:17:01.212 10:09:31 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:01.212 10:09:31 -- target/tls.sh@45 -- # killprocess 88117 00:17:01.212 10:09:31 -- common/autotest_common.sh@926 -- # '[' -z 88117 ']' 00:17:01.212 10:09:31 -- common/autotest_common.sh@930 -- # kill -0 88117 00:17:01.212 10:09:31 -- common/autotest_common.sh@931 -- # uname 00:17:01.212 10:09:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:01.212 10:09:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88117 00:17:01.212 killing process with pid 88117 00:17:01.212 Received shutdown signal, test time was about 10.000000 seconds 00:17:01.212 00:17:01.212 Latency(us) 00:17:01.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:01.212 =================================================================================================================== 00:17:01.212 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:01.212 10:09:31 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:01.213 10:09:31 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:01.213 10:09:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88117' 00:17:01.213 10:09:31 -- common/autotest_common.sh@945 -- # kill 88117 00:17:01.213 10:09:31 -- common/autotest_common.sh@950 -- # wait 88117 00:17:01.213 10:09:31 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:01.213 10:09:31 -- common/autotest_common.sh@640 -- # local es=0 00:17:01.213 10:09:31 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:01.213 10:09:31 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:01.213 10:09:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:01.213 10:09:31 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:01.213 10:09:31 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:01.213 10:09:31 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:01.213 10:09:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:01.213 10:09:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:01.213 10:09:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:01.213 10:09:31 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:17:01.213 10:09:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:01.213 10:09:31 -- target/tls.sh@28 -- # bdevperf_pid=88265 00:17:01.213 10:09:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:17:01.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:01.213 10:09:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:01.213 10:09:31 -- target/tls.sh@31 -- # waitforlisten 88265 /var/tmp/bdevperf.sock 00:17:01.213 10:09:31 -- common/autotest_common.sh@819 -- # '[' -z 88265 ']' 00:17:01.213 10:09:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:01.213 10:09:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:01.213 10:09:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:01.213 10:09:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:01.213 10:09:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.213 [2024-07-14 10:09:31.373235] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:01.213 [2024-07-14 10:09:31.373339] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88265 ] 00:17:01.213 [2024-07-14 10:09:31.512261] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.213 [2024-07-14 10:09:31.591781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.779 10:09:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.779 10:09:32 -- common/autotest_common.sh@852 -- # return 0 00:17:01.779 10:09:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:17:02.038 [2024-07-14 10:09:32.512087] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.038 [2024-07-14 10:09:32.522056] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:02.038 [2024-07-14 10:09:32.522836] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23219c0 (107): Transport endpoint is not connected 00:17:02.038 [2024-07-14 10:09:32.523841] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23219c0 (9): Bad file descriptor 00:17:02.038 [2024-07-14 10:09:32.524823] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:02.038 [2024-07-14 10:09:32.524858] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:02.038 [2024-07-14 10:09:32.524882] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
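This failure is the point of the test: tls.sh@155 wraps run_bdevperf in NOT, which succeeds only when the wrapped command fails, and attaching with key2.txt must fail because only key1.txt was registered for host1. From the autotest_common.sh@640-@667 traces, NOT condenses to roughly this sketch (the traced helper also screens signal exits, the (( es > 128 )) check above):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))   # invert: the negative test passes when the command fails
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt

The JSON-RPC error echoed next is the client-side surface of the same refused attach.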
00:17:02.038 2024/07/14 10:09:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:02.038 request: 00:17:02.038 { 00:17:02.038 "method": "bdev_nvme_attach_controller", 00:17:02.038 "params": { 00:17:02.038 "name": "TLSTEST", 00:17:02.038 "trtype": "tcp", 00:17:02.038 "traddr": "10.0.0.2", 00:17:02.038 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.038 "adrfam": "ipv4", 00:17:02.039 "trsvcid": "4420", 00:17:02.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.039 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt" 00:17:02.039 } 00:17:02.039 } 00:17:02.039 Got JSON-RPC error response 00:17:02.039 GoRPCClient: error on JSON-RPC call 00:17:02.039 10:09:32 -- target/tls.sh@36 -- # killprocess 88265 00:17:02.039 10:09:32 -- common/autotest_common.sh@926 -- # '[' -z 88265 ']' 00:17:02.039 10:09:32 -- common/autotest_common.sh@930 -- # kill -0 88265 00:17:02.039 10:09:32 -- common/autotest_common.sh@931 -- # uname 00:17:02.039 10:09:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:02.039 10:09:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88265 00:17:02.039 killing process with pid 88265 00:17:02.039 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.039 00:17:02.039 Latency(us) 00:17:02.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.039 =================================================================================================================== 00:17:02.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.039 10:09:32 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:02.039 10:09:32 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:02.039 10:09:32 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88265' 00:17:02.039 10:09:32 -- common/autotest_common.sh@945 -- # kill 88265 00:17:02.039 10:09:32 -- common/autotest_common.sh@950 -- # wait 88265 00:17:02.298 10:09:32 -- target/tls.sh@37 -- # return 1 00:17:02.298 10:09:32 -- common/autotest_common.sh@643 -- # es=1 00:17:02.298 10:09:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:02.298 10:09:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:02.298 10:09:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:02.298 10:09:32 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:02.298 10:09:32 -- common/autotest_common.sh@640 -- # local es=0 00:17:02.298 10:09:32 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:02.298 10:09:32 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:02.298 10:09:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.298 10:09:32 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:02.298 10:09:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:02.298 10:09:32 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:02.298 10:09:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:02.298 10:09:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:02.298 10:09:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:02.298 10:09:32 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:02.298 10:09:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:02.298 10:09:32 -- target/tls.sh@28 -- # bdevperf_pid=88316 00:17:02.298 10:09:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:02.298 10:09:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:02.298 10:09:32 -- target/tls.sh@31 -- # waitforlisten 88316 /var/tmp/bdevperf.sock 00:17:02.298 10:09:32 -- common/autotest_common.sh@819 -- # '[' -z 88316 ']' 00:17:02.298 10:09:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:02.298 10:09:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:02.298 10:09:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:02.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:02.298 10:09:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:02.298 10:09:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.298 [2024-07-14 10:09:32.833225] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:02.298 [2024-07-14 10:09:32.833331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88316 ] 00:17:02.557 [2024-07-14 10:09:32.970829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.557 [2024-07-14 10:09:33.027849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.492 10:09:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:03.492 10:09:33 -- common/autotest_common.sh@852 -- # return 0 00:17:03.492 10:09:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.750 [2024-07-14 10:09:34.056829] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:03.750 [2024-07-14 10:09:34.062154] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:03.750 [2024-07-14 10:09:34.062232] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:03.750 [2024-07-14 10:09:34.062285] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:03.750 [2024-07-14 10:09:34.062716] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f879c0 (107): Transport endpoint is not connected 
00:17:03.750 [2024-07-14 10:09:34.063677] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f879c0 (9): Bad file descriptor 00:17:03.750 [2024-07-14 10:09:34.064672] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:03.751 [2024-07-14 10:09:34.064709] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:03.751 [2024-07-14 10:09:34.064734] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:03.751 2024/07/14 10:09:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:03.751 request: 00:17:03.751 { 00:17:03.751 "method": "bdev_nvme_attach_controller", 00:17:03.751 "params": { 00:17:03.751 "name": "TLSTEST", 00:17:03.751 "trtype": "tcp", 00:17:03.751 "traddr": "10.0.0.2", 00:17:03.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:03.751 "adrfam": "ipv4", 00:17:03.751 "trsvcid": "4420", 00:17:03.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:03.751 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:03.751 } 00:17:03.751 } 00:17:03.751 Got JSON-RPC error response 00:17:03.751 GoRPCClient: error on JSON-RPC call 00:17:03.751 10:09:34 -- target/tls.sh@36 -- # killprocess 88316 00:17:03.751 10:09:34 -- common/autotest_common.sh@926 -- # '[' -z 88316 ']' 00:17:03.751 10:09:34 -- common/autotest_common.sh@930 -- # kill -0 88316 00:17:03.751 10:09:34 -- common/autotest_common.sh@931 -- # uname 00:17:03.751 10:09:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:03.751 10:09:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88316 00:17:03.751 killing process with pid 88316 00:17:03.751 Received shutdown signal, test time was about 10.000000 seconds 00:17:03.751 00:17:03.751 Latency(us) 00:17:03.751 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:03.751 =================================================================================================================== 00:17:03.751 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:03.751 10:09:34 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:03.751 10:09:34 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:03.751 10:09:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88316' 00:17:03.751 10:09:34 -- common/autotest_common.sh@945 -- # kill 88316 00:17:03.751 10:09:34 -- common/autotest_common.sh@950 -- # wait 88316 00:17:03.751 10:09:34 -- target/tls.sh@37 -- # return 1 00:17:03.751 10:09:34 -- common/autotest_common.sh@643 -- # es=1 00:17:03.751 10:09:34 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:03.751 10:09:34 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:03.751 10:09:34 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:03.751 10:09:34 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.751 10:09:34 -- common/autotest_common.sh@640 -- # local es=0 00:17:03.751 10:09:34 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:03.751 10:09:34 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:03.751 10:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:03.751 10:09:34 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:04.009 10:09:34 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:04.009 10:09:34 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.009 10:09:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:04.010 10:09:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:04.010 10:09:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:04.010 10:09:34 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:17:04.010 10:09:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:04.010 10:09:34 -- target/tls.sh@28 -- # bdevperf_pid=88356 00:17:04.010 10:09:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:04.010 10:09:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:04.010 10:09:34 -- target/tls.sh@31 -- # waitforlisten 88356 /var/tmp/bdevperf.sock 00:17:04.010 10:09:34 -- common/autotest_common.sh@819 -- # '[' -z 88356 ']' 00:17:04.010 10:09:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:04.010 10:09:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:04.010 10:09:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:04.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:04.010 10:09:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:04.010 10:09:34 -- common/autotest_common.sh@10 -- # set +x 00:17:04.010 [2024-07-14 10:09:34.337649] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
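Every bdevperf client in this run is launched with the same flags as the invocation traced above. A quick gloss, based on the commonly documented meanings of SPDK bdevperf's options (worth double-checking against the in-tree help if anything looks off):

    # core mask 0x4 (core 2); -z = start idle and wait for an RPC to begin I/O;
    # -r = RPC listen address; -q = queue depth; -o = I/O size in bytes;
    # -w = workload type; -t = run time in seconds
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z \
        -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10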
00:17:04.010 [2024-07-14 10:09:34.337920] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88356 ] 00:17:04.010 [2024-07-14 10:09:34.465615] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:04.010 [2024-07-14 10:09:34.532855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.948 10:09:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:04.948 10:09:35 -- common/autotest_common.sh@852 -- # return 0 00:17:04.948 10:09:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:17:04.948 [2024-07-14 10:09:35.450433] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.948 [2024-07-14 10:09:35.462293] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:04.948 [2024-07-14 10:09:35.462347] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:04.948 [2024-07-14 10:09:35.462411] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:04.948 [2024-07-14 10:09:35.463113] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b19c0 (107): Transport endpoint is not connected 00:17:04.948 [2024-07-14 10:09:35.464104] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19b19c0 (9): Bad file descriptor 00:17:04.948 [2024-07-14 10:09:35.465100] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:04.948 [2024-07-14 10:09:35.465136] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:04.948 [2024-07-14 10:09:35.465162] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
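The failure above (its JSON-RPC error dump follows immediately below) is the intended outcome of this negative test: per the tcp_sock_get_key error, the target resolves PSKs by the identity string "NVMe0R01 <hostnqn> <subnqn>", and no key is registered for host1 against cnode2, so the handshake cannot complete. A minimal reproduction using the exact RPC from the trace:

    # Expected to fail with Code=-32602 Msg=Invalid parameters: the target has no
    # PSK for identity "NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt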
00:17:04.948 2024/07/14 10:09:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:04.948 request: 00:17:04.948 { 00:17:04.948 "method": "bdev_nvme_attach_controller", 00:17:04.948 "params": { 00:17:04.948 "name": "TLSTEST", 00:17:04.948 "trtype": "tcp", 00:17:04.948 "traddr": "10.0.0.2", 00:17:04.948 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:04.948 "adrfam": "ipv4", 00:17:04.948 "trsvcid": "4420", 00:17:04.948 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:04.948 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt" 00:17:04.948 } 00:17:04.948 } 00:17:04.948 Got JSON-RPC error response 00:17:04.948 GoRPCClient: error on JSON-RPC call 00:17:04.948 10:09:35 -- target/tls.sh@36 -- # killprocess 88356 00:17:04.948 10:09:35 -- common/autotest_common.sh@926 -- # '[' -z 88356 ']' 00:17:04.948 10:09:35 -- common/autotest_common.sh@930 -- # kill -0 88356 00:17:04.948 10:09:35 -- common/autotest_common.sh@931 -- # uname 00:17:04.948 10:09:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:04.948 10:09:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88356 00:17:05.207 killing process with pid 88356 00:17:05.207 Received shutdown signal, test time was about 10.000000 seconds 00:17:05.207 00:17:05.207 Latency(us) 00:17:05.207 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.207 =================================================================================================================== 00:17:05.207 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:05.207 10:09:35 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:05.207 10:09:35 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:05.207 10:09:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88356' 00:17:05.207 10:09:35 -- common/autotest_common.sh@945 -- # kill 88356 00:17:05.207 10:09:35 -- common/autotest_common.sh@950 -- # wait 88356 00:17:05.207 10:09:35 -- target/tls.sh@37 -- # return 1 00:17:05.207 10:09:35 -- common/autotest_common.sh@643 -- # es=1 00:17:05.207 10:09:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:05.207 10:09:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:05.207 10:09:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:05.207 10:09:35 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:05.207 10:09:35 -- common/autotest_common.sh@640 -- # local es=0 00:17:05.207 10:09:35 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:05.207 10:09:35 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:05.207 10:09:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.207 10:09:35 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:05.208 10:09:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:05.208 10:09:35 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:05.208 10:09:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.208 10:09:35 -- target/tls.sh@23 -- 
# subnqn=nqn.2016-06.io.spdk:cnode1 00:17:05.208 10:09:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.208 10:09:35 -- target/tls.sh@23 -- # psk= 00:17:05.208 10:09:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.208 10:09:35 -- target/tls.sh@28 -- # bdevperf_pid=88396 00:17:05.208 10:09:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.208 10:09:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.208 10:09:35 -- target/tls.sh@31 -- # waitforlisten 88396 /var/tmp/bdevperf.sock 00:17:05.208 10:09:35 -- common/autotest_common.sh@819 -- # '[' -z 88396 ']' 00:17:05.208 10:09:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.208 10:09:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.208 10:09:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.208 10:09:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.208 10:09:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.208 [2024-07-14 10:09:35.749381] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:05.208 [2024-07-14 10:09:35.749496] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88396 ] 00:17:05.467 [2024-07-14 10:09:35.890556] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.467 [2024-07-14 10:09:35.957089] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.405 10:09:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.405 10:09:36 -- common/autotest_common.sh@852 -- # return 0 00:17:06.405 10:09:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:06.405 [2024-07-14 10:09:36.822559] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:06.405 [2024-07-14 10:09:36.824167] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc8b5e0 (9): Bad file descriptor 00:17:06.405 [2024-07-14 10:09:36.825162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:06.405 [2024-07-14 10:09:36.825207] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:06.405 [2024-07-14 10:09:36.825235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
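Here the controller is attached with no --psk at all against the TLS listener set up earlier, so the handshake has nothing to offer and the connection dies the same way (the JSON-RPC dump for it follows below). The shape of the failing call, verbatim from the trace:

    # No --psk argument: expected to fail against a TLS-only listener.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1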
00:17:06.405 2024/07/14 10:09:36 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:17:06.405 request: 00:17:06.406 { 00:17:06.406 "method": "bdev_nvme_attach_controller", 00:17:06.406 "params": { 00:17:06.406 "name": "TLSTEST", 00:17:06.406 "trtype": "tcp", 00:17:06.406 "traddr": "10.0.0.2", 00:17:06.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:06.406 "adrfam": "ipv4", 00:17:06.406 "trsvcid": "4420", 00:17:06.406 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:17:06.406 } 00:17:06.406 } 00:17:06.406 Got JSON-RPC error response 00:17:06.406 GoRPCClient: error on JSON-RPC call 00:17:06.406 10:09:36 -- target/tls.sh@36 -- # killprocess 88396 00:17:06.406 10:09:36 -- common/autotest_common.sh@926 -- # '[' -z 88396 ']' 00:17:06.406 10:09:36 -- common/autotest_common.sh@930 -- # kill -0 88396 00:17:06.406 10:09:36 -- common/autotest_common.sh@931 -- # uname 00:17:06.406 10:09:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.406 10:09:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88396 00:17:06.406 killing process with pid 88396 00:17:06.406 Received shutdown signal, test time was about 10.000000 seconds 00:17:06.406 00:17:06.406 Latency(us) 00:17:06.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:06.406 =================================================================================================================== 00:17:06.406 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:06.406 10:09:36 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:06.406 10:09:36 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:06.406 10:09:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88396' 00:17:06.406 10:09:36 -- common/autotest_common.sh@945 -- # kill 88396 00:17:06.406 10:09:36 -- common/autotest_common.sh@950 -- # wait 88396 00:17:06.664 10:09:37 -- target/tls.sh@37 -- # return 1 00:17:06.664 10:09:37 -- common/autotest_common.sh@643 -- # es=1 00:17:06.664 10:09:37 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:06.665 10:09:37 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:06.665 10:09:37 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:06.665 10:09:37 -- target/tls.sh@167 -- # killprocess 87751 00:17:06.665 10:09:37 -- common/autotest_common.sh@926 -- # '[' -z 87751 ']' 00:17:06.665 10:09:37 -- common/autotest_common.sh@930 -- # kill -0 87751 00:17:06.665 10:09:37 -- common/autotest_common.sh@931 -- # uname 00:17:06.665 10:09:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:06.665 10:09:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 87751 00:17:06.665 killing process with pid 87751 00:17:06.665 10:09:37 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:06.665 10:09:37 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:06.665 10:09:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 87751' 00:17:06.665 10:09:37 -- common/autotest_common.sh@945 -- # kill 87751 00:17:06.665 10:09:37 -- common/autotest_common.sh@950 -- # wait 87751 00:17:06.924 10:09:37 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:17:06.924 10:09:37 -- 
target/tls.sh@49 -- # local key hash crc 00:17:06.924 10:09:37 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:06.924 10:09:37 -- target/tls.sh@51 -- # hash=02 00:17:06.924 10:09:37 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff0011223344556677 00:17:06.924 10:09:37 -- target/tls.sh@52 -- # gzip -1 -c 00:17:06.924 10:09:37 -- target/tls.sh@52 -- # tail -c8 00:17:06.924 10:09:37 -- target/tls.sh@52 -- # head -c 4 00:17:06.924 10:09:37 -- target/tls.sh@52 -- # crc='�e�'\''' 00:17:06.924 10:09:37 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:17:06.924 10:09:37 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:17:06.924 10:09:37 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:06.924 10:09:37 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:06.924 10:09:37 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:06.924 10:09:37 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:06.924 10:09:37 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:06.924 10:09:37 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:17:06.924 10:09:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:06.924 10:09:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:06.924 10:09:37 -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 10:09:37 -- nvmf/common.sh@469 -- # nvmfpid=88462 00:17:06.924 10:09:37 -- nvmf/common.sh@470 -- # waitforlisten 88462 00:17:06.924 10:09:37 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:06.924 10:09:37 -- common/autotest_common.sh@819 -- # '[' -z 88462 ']' 00:17:06.924 10:09:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.924 10:09:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:06.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.924 10:09:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.924 10:09:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:06.924 10:09:37 -- common/autotest_common.sh@10 -- # set +x 00:17:06.924 [2024-07-14 10:09:37.383315] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:06.924 [2024-07-14 10:09:37.383441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.183 [2024-07-14 10:09:37.518674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.183 [2024-07-14 10:09:37.570822] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:07.183 [2024-07-14 10:09:37.570975] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:07.183 [2024-07-14 10:09:37.570988] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
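The format_interchange_psk trace above condenses to a small standalone sketch. This is a reconstruction for readability, not part of the suite; it assumes GNU gzip and coreutils base64, and it keeps the CRC in a shell variable, which works here only because these four CRC bytes happen to contain no NUL:

    key=00112233445566778899aabbccddeeff0011223344556677
    hash=02
    # The last 8 bytes of a gzip stream are CRC-32 (little-endian) followed by
    # ISIZE, so 'tail -c8 | head -c 4' extracts the CRC-32 of the input.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    # Interchange format: NVMeTLSkey-1:<hash>:base64(key || CRC-32):
    echo "NVMeTLSkey-1:$hash:$(echo -n "$key$crc" | base64):"
    # prints NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:

The mojibake in the crc= line of the trace is just those raw CRC bytes (0xc1 0x65 0xcd 0x27) echoed into the log.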
00:17:07.183 [2024-07-14 10:09:37.570995] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:07.183 [2024-07-14 10:09:37.571018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.750 10:09:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:07.750 10:09:38 -- common/autotest_common.sh@852 -- # return 0 00:17:07.750 10:09:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:07.750 10:09:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:07.750 10:09:38 -- common/autotest_common.sh@10 -- # set +x 00:17:07.750 10:09:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.750 10:09:38 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.750 10:09:38 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:07.750 10:09:38 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:08.009 [2024-07-14 10:09:38.543596] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:08.268 10:09:38 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:08.268 10:09:38 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:08.527 [2024-07-14 10:09:38.923662] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:08.527 [2024-07-14 10:09:38.923851] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:08.527 10:09:38 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:08.786 malloc0 00:17:08.786 10:09:39 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:09.045 10:09:39 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:09.045 10:09:39 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:09.045 10:09:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:09.045 10:09:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:09.045 10:09:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:09.045 10:09:39 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:09.045 10:09:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:09.045 10:09:39 -- target/tls.sh@28 -- # bdevperf_pid=88558 00:17:09.045 10:09:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:09.045 10:09:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:09.045 10:09:39 -- target/tls.sh@31 -- # waitforlisten 88558 /var/tmp/bdevperf.sock 00:17:09.045 10:09:39 -- common/autotest_common.sh@819 -- # '[' -z 88558 ']' 00:17:09.045 10:09:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:09.045 
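For reference, the setup_nvmf_tgt sequence traced above reduces to six RPCs against the target; this sketch just collects them (rpc.py shortened from the full /home/vagrant/spdk_repo/spdk/scripts path used in the log):

    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    # -k marks the listener as TLS ("TLS support is considered experimental")
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt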
10:09:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:09.045 10:09:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:09.045 10:09:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.045 10:09:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.304 [2024-07-14 10:09:39.624495] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:09.304 [2024-07-14 10:09:39.624609] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88558 ] 00:17:09.304 [2024-07-14 10:09:39.764422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.304 [2024-07-14 10:09:39.823169] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:10.248 10:09:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:10.248 10:09:40 -- common/autotest_common.sh@852 -- # return 0 00:17:10.248 10:09:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:10.508 [2024-07-14 10:09:40.810015] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:10.508 TLSTESTn1 00:17:10.508 10:09:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:10.508 Running I/O for 10 seconds... 
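bdevperf was started with -z, so it sits idle until the script drives it over the RPC socket; the "Running I/O for 10 seconds..." line above is that kick, issued as in the trace (-t 20 here appears to be the RPC timeout rather than the I/O duration, which was fixed at 10 s by bdevperf's own -t flag):

    # Ask the waiting bdevperf instance to execute its configured workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s /var/tmp/bdevperf.sock perform_tests

The per-core Latency table that follows is the result of this run: TLSTESTn1 sustained roughly 6265 IOPS of 4 KiB verified I/O (about 24.5 MiB/s) over the TLS-wrapped TCP connection.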
00:17:20.485 00:17:20.485 Latency(us) 00:17:20.485 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.485 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:20.485 Verification LBA range: start 0x0 length 0x2000 00:17:20.485 TLSTESTn1 : 10.02 6265.50 24.47 0.00 0.00 20393.54 3872.58 19065.02 00:17:20.485 =================================================================================================================== 00:17:20.485 Total : 6265.50 24.47 0.00 0.00 20393.54 3872.58 19065.02 00:17:20.485 0 00:17:20.485 10:09:51 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:20.485 10:09:51 -- target/tls.sh@45 -- # killprocess 88558 00:17:20.485 10:09:51 -- common/autotest_common.sh@926 -- # '[' -z 88558 ']' 00:17:20.485 10:09:51 -- common/autotest_common.sh@930 -- # kill -0 88558 00:17:20.485 10:09:51 -- common/autotest_common.sh@931 -- # uname 00:17:20.744 10:09:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:20.744 10:09:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88558 00:17:20.744 killing process with pid 88558 00:17:20.744 Received shutdown signal, test time was about 10.000000 seconds 00:17:20.744 00:17:20.744 Latency(us) 00:17:20.744 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:20.744 =================================================================================================================== 00:17:20.744 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:20.744 10:09:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:20.744 10:09:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:20.744 10:09:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88558' 00:17:20.744 10:09:51 -- common/autotest_common.sh@945 -- # kill 88558 00:17:20.744 10:09:51 -- common/autotest_common.sh@950 -- # wait 88558 00:17:20.744 10:09:51 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.744 10:09:51 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.744 10:09:51 -- common/autotest_common.sh@640 -- # local es=0 00:17:20.744 10:09:51 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.744 10:09:51 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:17:20.744 10:09:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.744 10:09:51 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:17:20.744 10:09:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:20.744 10:09:51 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:20.744 10:09:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:20.744 10:09:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:20.744 10:09:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:20.744 10:09:51 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:17:20.744 10:09:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:20.744 10:09:51 -- target/tls.sh@28 -- # bdevperf_pid=88712 
00:17:20.744 10:09:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.744 10:09:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:20.744 10:09:51 -- target/tls.sh@31 -- # waitforlisten 88712 /var/tmp/bdevperf.sock 00:17:20.744 10:09:51 -- common/autotest_common.sh@819 -- # '[' -z 88712 ']' 00:17:20.744 10:09:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.744 10:09:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.744 10:09:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.744 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.744 10:09:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.744 10:09:51 -- common/autotest_common.sh@10 -- # set +x 00:17:21.002 [2024-07-14 10:09:51.303857] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:21.002 [2024-07-14 10:09:51.303959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88712 ] 00:17:21.002 [2024-07-14 10:09:51.441904] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.003 [2024-07-14 10:09:51.500190] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.938 10:09:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.938 10:09:52 -- common/autotest_common.sh@852 -- # return 0 00:17:21.938 10:09:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.196 [2024-07-14 10:09:52.516769] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:22.196 [2024-07-14 10:09:52.516817] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:22.196 2024/07/14 10:09:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-22 Msg=Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:22.196 request: 00:17:22.196 { 00:17:22.196 "method": "bdev_nvme_attach_controller", 00:17:22.196 "params": { 00:17:22.196 "name": "TLSTEST", 00:17:22.196 "trtype": "tcp", 00:17:22.196 "traddr": "10.0.0.2", 00:17:22.196 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:22.196 "adrfam": "ipv4", 00:17:22.196 "trsvcid": "4420", 00:17:22.196 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:22.196 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:22.196 } 00:17:22.196 } 00:17:22.196 Got JSON-RPC error response 00:17:22.196 GoRPCClient: error on JSON-RPC call 00:17:22.196 10:09:52 -- target/tls.sh@36 -- # killprocess 88712 00:17:22.196 10:09:52 -- common/autotest_common.sh@926 -- # '[' -z 88712 ']' 
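Unlike the earlier failures, this one apparently never reaches the network: the trace shows tcp_load_psk rejecting the PSK file purely on its mode, via the "Incorrect permissions for PSK file" error from bdev_nvme_rpc.c. The sequence being exercised, with paths abbreviated from the trace:

    chmod 0666 key_long.txt     # deliberately too permissive
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key_long.txt
    # fails with Code=-22 Msg="Could not retrieve PSK from file: ..."
    chmod 0600 key_long.txt     # the suite restores owner-only access before reuse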
00:17:22.196 10:09:52 -- common/autotest_common.sh@930 -- # kill -0 88712 00:17:22.196 10:09:52 -- common/autotest_common.sh@931 -- # uname 00:17:22.196 10:09:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.196 10:09:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88712 00:17:22.196 10:09:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:22.196 killing process with pid 88712 00:17:22.196 Received shutdown signal, test time was about 10.000000 seconds 00:17:22.196 00:17:22.196 Latency(us) 00:17:22.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:22.196 =================================================================================================================== 00:17:22.196 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:22.196 10:09:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:22.196 10:09:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88712' 00:17:22.196 10:09:52 -- common/autotest_common.sh@945 -- # kill 88712 00:17:22.196 10:09:52 -- common/autotest_common.sh@950 -- # wait 88712 00:17:22.455 10:09:52 -- target/tls.sh@37 -- # return 1 00:17:22.455 10:09:52 -- common/autotest_common.sh@643 -- # es=1 00:17:22.455 10:09:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:22.455 10:09:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:22.455 10:09:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:22.455 10:09:52 -- target/tls.sh@183 -- # killprocess 88462 00:17:22.455 10:09:52 -- common/autotest_common.sh@926 -- # '[' -z 88462 ']' 00:17:22.455 10:09:52 -- common/autotest_common.sh@930 -- # kill -0 88462 00:17:22.455 10:09:52 -- common/autotest_common.sh@931 -- # uname 00:17:22.455 10:09:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:22.455 10:09:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88462 00:17:22.455 10:09:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:22.455 killing process with pid 88462 00:17:22.455 10:09:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:22.455 10:09:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88462' 00:17:22.455 10:09:52 -- common/autotest_common.sh@945 -- # kill 88462 00:17:22.455 10:09:52 -- common/autotest_common.sh@950 -- # wait 88462 00:17:22.455 10:09:52 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:17:22.455 10:09:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:22.455 10:09:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:22.455 10:09:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.455 10:09:52 -- nvmf/common.sh@469 -- # nvmfpid=88757 00:17:22.455 10:09:52 -- nvmf/common.sh@470 -- # waitforlisten 88757 00:17:22.455 10:09:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:22.455 10:09:52 -- common/autotest_common.sh@819 -- # '[' -z 88757 ']' 00:17:22.455 10:09:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:22.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:22.455 10:09:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:22.455 10:09:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
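All of these negative cases funnel through the same guard, visible in the es=1 bookkeeping above: the candidate command runs under a NOT-style wrapper and the test passes exactly when the command fails. A simplified sketch of the assumed shape of that helper (the real one in autotest_common.sh is more elaborate, also distinguishing signal deaths via es > 128 and supporting pattern checks):

    NOT() {
        local es=0
        "$@" || es=$?
        (( es != 0 ))    # invert: succeed only if the wrapped command failed
    }
    NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 ''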
00:17:22.455 10:09:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:22.455 10:09:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.714 [2024-07-14 10:09:53.039718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:22.714 [2024-07-14 10:09:53.039816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:22.714 [2024-07-14 10:09:53.177344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:22.714 [2024-07-14 10:09:53.228517] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:22.714 [2024-07-14 10:09:53.228662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:22.714 [2024-07-14 10:09:53.228674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:22.714 [2024-07-14 10:09:53.228682] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:22.714 [2024-07-14 10:09:53.228705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.647 10:09:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:23.647 10:09:53 -- common/autotest_common.sh@852 -- # return 0 00:17:23.647 10:09:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:23.647 10:09:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:23.647 10:09:53 -- common/autotest_common.sh@10 -- # set +x 00:17:23.647 10:09:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:23.647 10:09:53 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.647 10:09:53 -- common/autotest_common.sh@640 -- # local es=0 00:17:23.647 10:09:53 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.647 10:09:53 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:17:23.647 10:09:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.647 10:09:53 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:17:23.647 10:09:53 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:23.647 10:09:53 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.647 10:09:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:23.647 10:09:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:23.647 [2024-07-14 10:09:54.190625] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:23.906 10:09:54 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:23.906 10:09:54 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:24.165 [2024-07-14 10:09:54.638688] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:24.165 [2024-07-14 10:09:54.638930] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
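Note the ordering in this negative setup_nvmf_tgt run: the TLS listener comes up fine even though key_long.txt is still mode 0666, presumably because nothing has opened the key yet; the first consumer is nvmf_subsystem_add_host just below, and that is where the permission check fires. The listener flag and where it resurfaces:

    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # '-k' is what appears as "secure_channel": true under nvmf_subsystem_add_listener
    # in the save_config dump later in this log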
00:17:24.165 10:09:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:24.424 malloc0 00:17:24.424 10:09:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:24.684 10:09:55 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.684 [2024-07-14 10:09:55.197961] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:24.684 [2024-07-14 10:09:55.197989] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:17:24.684 [2024-07-14 10:09:55.198020] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:17:24.684 2024/07/14 10:09:55 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:17:24.684 request: 00:17:24.684 { 00:17:24.684 "method": "nvmf_subsystem_add_host", 00:17:24.684 "params": { 00:17:24.684 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:24.684 "host": "nqn.2016-06.io.spdk:host1", 00:17:24.684 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:24.684 } 00:17:24.684 } 00:17:24.684 Got JSON-RPC error response 00:17:24.684 GoRPCClient: error on JSON-RPC call 00:17:24.684 10:09:55 -- common/autotest_common.sh@643 -- # es=1 00:17:24.684 10:09:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:24.684 10:09:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:24.684 10:09:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:24.684 10:09:55 -- target/tls.sh@189 -- # killprocess 88757 00:17:24.684 10:09:55 -- common/autotest_common.sh@926 -- # '[' -z 88757 ']' 00:17:24.684 10:09:55 -- common/autotest_common.sh@930 -- # kill -0 88757 00:17:24.684 10:09:55 -- common/autotest_common.sh@931 -- # uname 00:17:24.684 10:09:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:24.684 10:09:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88757 00:17:24.943 10:09:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:24.943 10:09:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:24.943 killing process with pid 88757 00:17:24.943 10:09:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88757' 00:17:24.943 10:09:55 -- common/autotest_common.sh@945 -- # kill 88757 00:17:24.943 10:09:55 -- common/autotest_common.sh@950 -- # wait 88757 00:17:24.943 10:09:55 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:24.943 10:09:55 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:17:24.943 10:09:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.943 10:09:55 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:24.943 10:09:55 -- common/autotest_common.sh@10 -- # set +x 00:17:24.943 10:09:55 -- nvmf/common.sh@469 -- # nvmfpid=88868 00:17:24.943 10:09:55 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:24.943 10:09:55 -- nvmf/common.sh@470 -- # waitforlisten 88868 00:17:24.943 10:09:55 -- 
common/autotest_common.sh@819 -- # '[' -z 88868 ']' 00:17:24.943 10:09:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.943 10:09:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:24.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.944 10:09:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.944 10:09:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:24.944 10:09:55 -- common/autotest_common.sh@10 -- # set +x 00:17:25.203 [2024-07-14 10:09:55.496362] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:25.203 [2024-07-14 10:09:55.496476] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.203 [2024-07-14 10:09:55.634188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:25.203 [2024-07-14 10:09:55.694527] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.203 [2024-07-14 10:09:55.694706] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.203 [2024-07-14 10:09:55.694734] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.203 [2024-07-14 10:09:55.694742] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:25.203 [2024-07-14 10:09:55.694765] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:26.140 10:09:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:26.140 10:09:56 -- common/autotest_common.sh@852 -- # return 0 00:17:26.140 10:09:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:26.140 10:09:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:26.140 10:09:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.140 10:09:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:26.140 10:09:56 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.140 10:09:56 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:26.140 10:09:56 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:26.140 [2024-07-14 10:09:56.665175] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:26.140 10:09:56 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:26.399 10:09:56 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:26.658 [2024-07-14 10:09:57.121306] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:26.658 [2024-07-14 10:09:57.121527] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:26.658 10:09:57 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:26.917 malloc0 00:17:26.917 10:09:57 -- target/tls.sh@65 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:27.176 10:09:57 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:27.435 10:09:57 -- target/tls.sh@197 -- # bdevperf_pid=88965 00:17:27.435 10:09:57 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:27.435 10:09:57 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:27.435 10:09:57 -- target/tls.sh@200 -- # waitforlisten 88965 /var/tmp/bdevperf.sock 00:17:27.435 10:09:57 -- common/autotest_common.sh@819 -- # '[' -z 88965 ']' 00:17:27.435 10:09:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:27.435 10:09:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:27.435 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:27.435 10:09:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:27.435 10:09:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:27.435 10:09:57 -- common/autotest_common.sh@10 -- # set +x 00:17:27.435 [2024-07-14 10:09:57.832493] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:27.435 [2024-07-14 10:09:57.832599] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88965 ] 00:17:27.435 [2024-07-14 10:09:57.968552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.694 [2024-07-14 10:09:58.028105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:28.260 10:09:58 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:28.260 10:09:58 -- common/autotest_common.sh@852 -- # return 0 00:17:28.260 10:09:58 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:28.519 [2024-07-14 10:09:59.011790] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.779 TLSTESTn1 00:17:28.779 10:09:59 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:29.038 10:09:59 -- target/tls.sh@205 -- # tgtconf='{ 00:17:29.038 "subsystems": [ 00:17:29.038 { 00:17:29.038 "subsystem": "iobuf", 00:17:29.038 "config": [ 00:17:29.038 { 00:17:29.038 "method": "iobuf_set_options", 00:17:29.038 "params": { 00:17:29.038 "large_bufsize": 135168, 00:17:29.038 "large_pool_count": 1024, 00:17:29.038 "small_bufsize": 8192, 00:17:29.038 "small_pool_count": 8192 00:17:29.038 } 00:17:29.038 } 00:17:29.038 ] 00:17:29.038 }, 00:17:29.038 { 00:17:29.038 "subsystem": "sock", 00:17:29.038 "config": [ 00:17:29.038 { 00:17:29.038 "method": "sock_impl_set_options", 00:17:29.038 "params": { 00:17:29.038 "enable_ktls": false, 00:17:29.038 "enable_placement_id": 0, 00:17:29.038 "enable_quickack": false, 00:17:29.038 "enable_recv_pipe": true, 00:17:29.038 
"enable_zerocopy_send_client": false, 00:17:29.038 "enable_zerocopy_send_server": true, 00:17:29.038 "impl_name": "posix", 00:17:29.038 "recv_buf_size": 2097152, 00:17:29.038 "send_buf_size": 2097152, 00:17:29.038 "tls_version": 0, 00:17:29.038 "zerocopy_threshold": 0 00:17:29.038 } 00:17:29.038 }, 00:17:29.038 { 00:17:29.038 "method": "sock_impl_set_options", 00:17:29.038 "params": { 00:17:29.038 "enable_ktls": false, 00:17:29.038 "enable_placement_id": 0, 00:17:29.038 "enable_quickack": false, 00:17:29.038 "enable_recv_pipe": true, 00:17:29.038 "enable_zerocopy_send_client": false, 00:17:29.038 "enable_zerocopy_send_server": true, 00:17:29.038 "impl_name": "ssl", 00:17:29.038 "recv_buf_size": 4096, 00:17:29.038 "send_buf_size": 4096, 00:17:29.038 "tls_version": 0, 00:17:29.038 "zerocopy_threshold": 0 00:17:29.038 } 00:17:29.038 } 00:17:29.038 ] 00:17:29.038 }, 00:17:29.038 { 00:17:29.038 "subsystem": "vmd", 00:17:29.038 "config": [] 00:17:29.038 }, 00:17:29.038 { 00:17:29.038 "subsystem": "accel", 00:17:29.038 "config": [ 00:17:29.038 { 00:17:29.038 "method": "accel_set_options", 00:17:29.038 "params": { 00:17:29.038 "buf_count": 2048, 00:17:29.038 "large_cache_size": 16, 00:17:29.038 "sequence_count": 2048, 00:17:29.038 "small_cache_size": 128, 00:17:29.038 "task_count": 2048 00:17:29.038 } 00:17:29.038 } 00:17:29.039 ] 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "subsystem": "bdev", 00:17:29.039 "config": [ 00:17:29.039 { 00:17:29.039 "method": "bdev_set_options", 00:17:29.039 "params": { 00:17:29.039 "bdev_auto_examine": true, 00:17:29.039 "bdev_io_cache_size": 256, 00:17:29.039 "bdev_io_pool_size": 65535, 00:17:29.039 "iobuf_large_cache_size": 16, 00:17:29.039 "iobuf_small_cache_size": 128 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_raid_set_options", 00:17:29.039 "params": { 00:17:29.039 "process_window_size_kb": 1024 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_iscsi_set_options", 00:17:29.039 "params": { 00:17:29.039 "timeout_sec": 30 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_nvme_set_options", 00:17:29.039 "params": { 00:17:29.039 "action_on_timeout": "none", 00:17:29.039 "allow_accel_sequence": false, 00:17:29.039 "arbitration_burst": 0, 00:17:29.039 "bdev_retry_count": 3, 00:17:29.039 "ctrlr_loss_timeout_sec": 0, 00:17:29.039 "delay_cmd_submit": true, 00:17:29.039 "fast_io_fail_timeout_sec": 0, 00:17:29.039 "generate_uuids": false, 00:17:29.039 "high_priority_weight": 0, 00:17:29.039 "io_path_stat": false, 00:17:29.039 "io_queue_requests": 0, 00:17:29.039 "keep_alive_timeout_ms": 10000, 00:17:29.039 "low_priority_weight": 0, 00:17:29.039 "medium_priority_weight": 0, 00:17:29.039 "nvme_adminq_poll_period_us": 10000, 00:17:29.039 "nvme_ioq_poll_period_us": 0, 00:17:29.039 "reconnect_delay_sec": 0, 00:17:29.039 "timeout_admin_us": 0, 00:17:29.039 "timeout_us": 0, 00:17:29.039 "transport_ack_timeout": 0, 00:17:29.039 "transport_retry_count": 4, 00:17:29.039 "transport_tos": 0 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_nvme_set_hotplug", 00:17:29.039 "params": { 00:17:29.039 "enable": false, 00:17:29.039 "period_us": 100000 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_malloc_create", 00:17:29.039 "params": { 00:17:29.039 "block_size": 4096, 00:17:29.039 "name": "malloc0", 00:17:29.039 "num_blocks": 8192, 00:17:29.039 "optimal_io_boundary": 0, 00:17:29.039 "physical_block_size": 4096, 00:17:29.039 "uuid": 
"3024bda1-bde4-41e8-a0a5-fa712037dfb3" 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "bdev_wait_for_examine" 00:17:29.039 } 00:17:29.039 ] 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "subsystem": "nbd", 00:17:29.039 "config": [] 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "subsystem": "scheduler", 00:17:29.039 "config": [ 00:17:29.039 { 00:17:29.039 "method": "framework_set_scheduler", 00:17:29.039 "params": { 00:17:29.039 "name": "static" 00:17:29.039 } 00:17:29.039 } 00:17:29.039 ] 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "subsystem": "nvmf", 00:17:29.039 "config": [ 00:17:29.039 { 00:17:29.039 "method": "nvmf_set_config", 00:17:29.039 "params": { 00:17:29.039 "admin_cmd_passthru": { 00:17:29.039 "identify_ctrlr": false 00:17:29.039 }, 00:17:29.039 "discovery_filter": "match_any" 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_set_max_subsystems", 00:17:29.039 "params": { 00:17:29.039 "max_subsystems": 1024 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_set_crdt", 00:17:29.039 "params": { 00:17:29.039 "crdt1": 0, 00:17:29.039 "crdt2": 0, 00:17:29.039 "crdt3": 0 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_create_transport", 00:17:29.039 "params": { 00:17:29.039 "abort_timeout_sec": 1, 00:17:29.039 "buf_cache_size": 4294967295, 00:17:29.039 "c2h_success": false, 00:17:29.039 "dif_insert_or_strip": false, 00:17:29.039 "in_capsule_data_size": 4096, 00:17:29.039 "io_unit_size": 131072, 00:17:29.039 "max_aq_depth": 128, 00:17:29.039 "max_io_qpairs_per_ctrlr": 127, 00:17:29.039 "max_io_size": 131072, 00:17:29.039 "max_queue_depth": 128, 00:17:29.039 "num_shared_buffers": 511, 00:17:29.039 "sock_priority": 0, 00:17:29.039 "trtype": "TCP", 00:17:29.039 "zcopy": false 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_create_subsystem", 00:17:29.039 "params": { 00:17:29.039 "allow_any_host": false, 00:17:29.039 "ana_reporting": false, 00:17:29.039 "max_cntlid": 65519, 00:17:29.039 "max_namespaces": 10, 00:17:29.039 "min_cntlid": 1, 00:17:29.039 "model_number": "SPDK bdev Controller", 00:17:29.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.039 "serial_number": "SPDK00000000000001" 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_subsystem_add_host", 00:17:29.039 "params": { 00:17:29.039 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.039 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_subsystem_add_ns", 00:17:29.039 "params": { 00:17:29.039 "namespace": { 00:17:29.039 "bdev_name": "malloc0", 00:17:29.039 "nguid": "3024BDA1BDE441E8A0A5FA712037DFB3", 00:17:29.039 "nsid": 1, 00:17:29.039 "uuid": "3024bda1-bde4-41e8-a0a5-fa712037dfb3" 00:17:29.039 }, 00:17:29.039 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:29.039 } 00:17:29.039 }, 00:17:29.039 { 00:17:29.039 "method": "nvmf_subsystem_add_listener", 00:17:29.039 "params": { 00:17:29.039 "listen_address": { 00:17:29.039 "adrfam": "IPv4", 00:17:29.039 "traddr": "10.0.0.2", 00:17:29.039 "trsvcid": "4420", 00:17:29.039 "trtype": "TCP" 00:17:29.039 }, 00:17:29.039 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.039 "secure_channel": true 00:17:29.039 } 00:17:29.039 } 00:17:29.039 ] 00:17:29.039 } 00:17:29.039 ] 00:17:29.039 }' 00:17:29.039 10:09:59 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 
00:17:29.299 10:09:59 -- target/tls.sh@206 -- # bdevperfconf='{ 00:17:29.299 "subsystems": [ 00:17:29.299 { 00:17:29.299 "subsystem": "iobuf", 00:17:29.299 "config": [ 00:17:29.299 { 00:17:29.299 "method": "iobuf_set_options", 00:17:29.299 "params": { 00:17:29.299 "large_bufsize": 135168, 00:17:29.299 "large_pool_count": 1024, 00:17:29.299 "small_bufsize": 8192, 00:17:29.299 "small_pool_count": 8192 00:17:29.299 } 00:17:29.299 } 00:17:29.299 ] 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "subsystem": "sock", 00:17:29.299 "config": [ 00:17:29.299 { 00:17:29.299 "method": "sock_impl_set_options", 00:17:29.299 "params": { 00:17:29.299 "enable_ktls": false, 00:17:29.299 "enable_placement_id": 0, 00:17:29.299 "enable_quickack": false, 00:17:29.299 "enable_recv_pipe": true, 00:17:29.299 "enable_zerocopy_send_client": false, 00:17:29.299 "enable_zerocopy_send_server": true, 00:17:29.299 "impl_name": "posix", 00:17:29.299 "recv_buf_size": 2097152, 00:17:29.299 "send_buf_size": 2097152, 00:17:29.299 "tls_version": 0, 00:17:29.299 "zerocopy_threshold": 0 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "sock_impl_set_options", 00:17:29.299 "params": { 00:17:29.299 "enable_ktls": false, 00:17:29.299 "enable_placement_id": 0, 00:17:29.299 "enable_quickack": false, 00:17:29.299 "enable_recv_pipe": true, 00:17:29.299 "enable_zerocopy_send_client": false, 00:17:29.299 "enable_zerocopy_send_server": true, 00:17:29.299 "impl_name": "ssl", 00:17:29.299 "recv_buf_size": 4096, 00:17:29.299 "send_buf_size": 4096, 00:17:29.299 "tls_version": 0, 00:17:29.299 "zerocopy_threshold": 0 00:17:29.299 } 00:17:29.299 } 00:17:29.299 ] 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "subsystem": "vmd", 00:17:29.299 "config": [] 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "subsystem": "accel", 00:17:29.299 "config": [ 00:17:29.299 { 00:17:29.299 "method": "accel_set_options", 00:17:29.299 "params": { 00:17:29.299 "buf_count": 2048, 00:17:29.299 "large_cache_size": 16, 00:17:29.299 "sequence_count": 2048, 00:17:29.299 "small_cache_size": 128, 00:17:29.299 "task_count": 2048 00:17:29.299 } 00:17:29.299 } 00:17:29.299 ] 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "subsystem": "bdev", 00:17:29.299 "config": [ 00:17:29.299 { 00:17:29.299 "method": "bdev_set_options", 00:17:29.299 "params": { 00:17:29.299 "bdev_auto_examine": true, 00:17:29.299 "bdev_io_cache_size": 256, 00:17:29.299 "bdev_io_pool_size": 65535, 00:17:29.299 "iobuf_large_cache_size": 16, 00:17:29.299 "iobuf_small_cache_size": 128 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_raid_set_options", 00:17:29.299 "params": { 00:17:29.299 "process_window_size_kb": 1024 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_iscsi_set_options", 00:17:29.299 "params": { 00:17:29.299 "timeout_sec": 30 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_nvme_set_options", 00:17:29.299 "params": { 00:17:29.299 "action_on_timeout": "none", 00:17:29.299 "allow_accel_sequence": false, 00:17:29.299 "arbitration_burst": 0, 00:17:29.299 "bdev_retry_count": 3, 00:17:29.299 "ctrlr_loss_timeout_sec": 0, 00:17:29.299 "delay_cmd_submit": true, 00:17:29.299 "fast_io_fail_timeout_sec": 0, 00:17:29.299 "generate_uuids": false, 00:17:29.299 "high_priority_weight": 0, 00:17:29.299 "io_path_stat": false, 00:17:29.299 "io_queue_requests": 512, 00:17:29.299 "keep_alive_timeout_ms": 10000, 00:17:29.299 "low_priority_weight": 0, 00:17:29.299 "medium_priority_weight": 0, 00:17:29.299 "nvme_adminq_poll_period_us": 
10000, 00:17:29.299 "nvme_ioq_poll_period_us": 0, 00:17:29.299 "reconnect_delay_sec": 0, 00:17:29.299 "timeout_admin_us": 0, 00:17:29.299 "timeout_us": 0, 00:17:29.299 "transport_ack_timeout": 0, 00:17:29.299 "transport_retry_count": 4, 00:17:29.299 "transport_tos": 0 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_nvme_attach_controller", 00:17:29.299 "params": { 00:17:29.299 "adrfam": "IPv4", 00:17:29.299 "ctrlr_loss_timeout_sec": 0, 00:17:29.299 "ddgst": false, 00:17:29.299 "fast_io_fail_timeout_sec": 0, 00:17:29.299 "hdgst": false, 00:17:29.299 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:29.299 "name": "TLSTEST", 00:17:29.299 "prchk_guard": false, 00:17:29.299 "prchk_reftag": false, 00:17:29.299 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:29.299 "reconnect_delay_sec": 0, 00:17:29.299 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.299 "traddr": "10.0.0.2", 00:17:29.299 "trsvcid": "4420", 00:17:29.299 "trtype": "TCP" 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_nvme_set_hotplug", 00:17:29.299 "params": { 00:17:29.299 "enable": false, 00:17:29.299 "period_us": 100000 00:17:29.299 } 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "method": "bdev_wait_for_examine" 00:17:29.299 } 00:17:29.299 ] 00:17:29.299 }, 00:17:29.299 { 00:17:29.299 "subsystem": "nbd", 00:17:29.299 "config": [] 00:17:29.299 } 00:17:29.299 ] 00:17:29.299 }' 00:17:29.299 10:09:59 -- target/tls.sh@208 -- # killprocess 88965 00:17:29.299 10:09:59 -- common/autotest_common.sh@926 -- # '[' -z 88965 ']' 00:17:29.299 10:09:59 -- common/autotest_common.sh@930 -- # kill -0 88965 00:17:29.299 10:09:59 -- common/autotest_common.sh@931 -- # uname 00:17:29.299 10:09:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.299 10:09:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88965 00:17:29.299 10:09:59 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:29.299 killing process with pid 88965 00:17:29.299 10:09:59 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:29.299 10:09:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88965' 00:17:29.299 10:09:59 -- common/autotest_common.sh@945 -- # kill 88965 00:17:29.299 Received shutdown signal, test time was about 10.000000 seconds 00:17:29.299 00:17:29.299 Latency(us) 00:17:29.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:29.299 =================================================================================================================== 00:17:29.299 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:29.299 10:09:59 -- common/autotest_common.sh@950 -- # wait 88965 00:17:29.559 10:09:59 -- target/tls.sh@209 -- # killprocess 88868 00:17:29.559 10:09:59 -- common/autotest_common.sh@926 -- # '[' -z 88868 ']' 00:17:29.559 10:09:59 -- common/autotest_common.sh@930 -- # kill -0 88868 00:17:29.559 10:09:59 -- common/autotest_common.sh@931 -- # uname 00:17:29.559 10:09:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:29.559 10:09:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 88868 00:17:29.559 10:09:59 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:29.559 10:09:59 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:29.559 killing process with pid 88868 00:17:29.559 10:09:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 88868' 00:17:29.559 10:09:59 -- 
common/autotest_common.sh@945 -- # kill 88868 00:17:29.559 10:09:59 -- common/autotest_common.sh@950 -- # wait 88868 00:17:29.559 10:10:00 -- target/tls.sh@212 -- # echo '{ 00:17:29.559 "subsystems": [ 00:17:29.559 { 00:17:29.559 "subsystem": "iobuf", 00:17:29.559 "config": [ 00:17:29.559 { 00:17:29.559 "method": "iobuf_set_options", 00:17:29.559 "params": { 00:17:29.559 "large_bufsize": 135168, 00:17:29.559 "large_pool_count": 1024, 00:17:29.559 "small_bufsize": 8192, 00:17:29.559 "small_pool_count": 8192 00:17:29.559 } 00:17:29.559 } 00:17:29.559 ] 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "subsystem": "sock", 00:17:29.559 "config": [ 00:17:29.559 { 00:17:29.559 "method": "sock_impl_set_options", 00:17:29.559 "params": { 00:17:29.559 "enable_ktls": false, 00:17:29.559 "enable_placement_id": 0, 00:17:29.559 "enable_quickack": false, 00:17:29.559 "enable_recv_pipe": true, 00:17:29.559 "enable_zerocopy_send_client": false, 00:17:29.559 "enable_zerocopy_send_server": true, 00:17:29.559 "impl_name": "posix", 00:17:29.559 "recv_buf_size": 2097152, 00:17:29.559 "send_buf_size": 2097152, 00:17:29.559 "tls_version": 0, 00:17:29.559 "zerocopy_threshold": 0 00:17:29.559 } 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "method": "sock_impl_set_options", 00:17:29.559 "params": { 00:17:29.559 "enable_ktls": false, 00:17:29.559 "enable_placement_id": 0, 00:17:29.559 "enable_quickack": false, 00:17:29.559 "enable_recv_pipe": true, 00:17:29.559 "enable_zerocopy_send_client": false, 00:17:29.559 "enable_zerocopy_send_server": true, 00:17:29.559 "impl_name": "ssl", 00:17:29.559 "recv_buf_size": 4096, 00:17:29.559 "send_buf_size": 4096, 00:17:29.559 "tls_version": 0, 00:17:29.559 "zerocopy_threshold": 0 00:17:29.559 } 00:17:29.559 } 00:17:29.559 ] 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "subsystem": "vmd", 00:17:29.559 "config": [] 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "subsystem": "accel", 00:17:29.559 "config": [ 00:17:29.559 { 00:17:29.559 "method": "accel_set_options", 00:17:29.559 "params": { 00:17:29.559 "buf_count": 2048, 00:17:29.559 "large_cache_size": 16, 00:17:29.559 "sequence_count": 2048, 00:17:29.559 "small_cache_size": 128, 00:17:29.559 "task_count": 2048 00:17:29.559 } 00:17:29.559 } 00:17:29.559 ] 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "subsystem": "bdev", 00:17:29.559 "config": [ 00:17:29.559 { 00:17:29.559 "method": "bdev_set_options", 00:17:29.559 "params": { 00:17:29.559 "bdev_auto_examine": true, 00:17:29.559 "bdev_io_cache_size": 256, 00:17:29.559 "bdev_io_pool_size": 65535, 00:17:29.559 "iobuf_large_cache_size": 16, 00:17:29.559 "iobuf_small_cache_size": 128 00:17:29.559 } 00:17:29.559 }, 00:17:29.559 { 00:17:29.559 "method": "bdev_raid_set_options", 00:17:29.559 "params": { 00:17:29.559 "process_window_size_kb": 1024 00:17:29.559 } 00:17:29.559 }, 00:17:29.560 { 00:17:29.560 "method": "bdev_iscsi_set_options", 00:17:29.560 "params": { 00:17:29.560 "timeout_sec": 30 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "bdev_nvme_set_options", 00:17:29.560 "params": { 00:17:29.560 "action_on_timeout": "none", 00:17:29.560 "allow_accel_sequence": false, 00:17:29.560 "arbitration_burst": 0, 00:17:29.560 "bdev_retry_count": 3, 00:17:29.560 "ctrlr_loss_timeout_sec": 0, 00:17:29.560 "delay_cmd_submit": true, 00:17:29.560 "fast_io_fail_timeout_sec": 0, 00:17:29.560 "generate_uuids": false, 00:17:29.560 "high_priority_weight": 0, 00:17:29.560 "io_path_stat": false, 00:17:29.560 "io_queue_requests": 0, 00:17:29.560 "keep_alive_timeout_ms": 10000, 00:17:29.560 
"low_priority_weight": 0, 00:17:29.560 "medium_priority_weight": 0, 00:17:29.560 "nvme_adminq_poll_period_us": 10000, 00:17:29.560 "nvme_ioq_poll_period_us": 0, 00:17:29.560 "reconnect_delay_sec": 0, 00:17:29.560 "timeout_admin_us": 0, 00:17:29.560 "timeout_us": 0, 00:17:29.560 "transport_ack_timeout": 0, 00:17:29.560 "transport_retry_count": 4, 00:17:29.560 "transport_tos": 0 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "bdev_nvme_set_hotplug", 00:17:29.560 "params": { 00:17:29.560 "enable": false, 00:17:29.560 "period_us": 100000 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "bdev_malloc_create", 00:17:29.560 "params": { 00:17:29.560 "block_size": 4096, 00:17:29.560 "name": "malloc0", 00:17:29.560 "num_blocks": 8192, 00:17:29.560 "optimal_io_boundary": 0, 00:17:29.560 "physical_block_size": 4096, 00:17:29.560 "uuid": "3024bda1-bde4-41e8-a0a5-fa712037dfb3" 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "bdev_wait_for_examine" 00:17:29.560 } 00:17:29.560 ] 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "subsystem": "nbd", 00:17:29.560 "config": [] 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "subsystem": "sch 10:10:00 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:29.560 eduler", 00:17:29.560 "config": [ 00:17:29.560 { 00:17:29.560 "method": "framework_set_scheduler", 00:17:29.560 "params": { 00:17:29.560 "name": "static" 00:17:29.560 } 00:17:29.560 } 00:17:29.560 ] 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "subsystem": "nvmf", 00:17:29.560 "config": [ 00:17:29.560 { 00:17:29.560 "method": "nvmf_set_config", 00:17:29.560 "params": { 00:17:29.560 "admin_cmd_passthru": { 00:17:29.560 "identify_ctrlr": false 00:17:29.560 }, 00:17:29.560 "discovery_filter": "match_any" 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_set_max_subsystems", 00:17:29.560 "params": { 00:17:29.560 "max_subsystems": 1024 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_set_crdt", 00:17:29.560 "params": { 00:17:29.560 "crdt1": 0, 00:17:29.560 "crdt2": 0, 00:17:29.560 "crdt3": 0 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_create_transport", 00:17:29.560 "params": { 00:17:29.560 "abort_timeout_sec": 1, 00:17:29.560 "buf_cache_size": 4294967295, 00:17:29.560 "c2h_success": false, 00:17:29.560 "dif_insert_or_strip": false, 00:17:29.560 "in_capsule_data_size": 4096, 00:17:29.560 "io_unit_size": 131072, 00:17:29.560 "max_aq_depth": 128, 00:17:29.560 "max_io_qpairs_per_ctrlr": 127, 00:17:29.560 "max_io_size": 131072, 00:17:29.560 "max_queue_depth": 128, 00:17:29.560 "num_shared_buffers": 511, 00:17:29.560 "sock_priority": 0, 00:17:29.560 "trtype": "TCP", 00:17:29.560 "zcopy": false 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_create_subsystem", 00:17:29.560 "params": { 00:17:29.560 "allow_any_host": false, 00:17:29.560 "ana_reporting": false, 00:17:29.560 "max_cntlid": 65519, 00:17:29.560 "max_namespaces": 10, 00:17:29.560 "min_cntlid": 1, 00:17:29.560 "model_number": "SPDK bdev Controller", 00:17:29.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.560 "serial_number": "SPDK00000000000001" 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_subsystem_add_host", 00:17:29.560 "params": { 00:17:29.560 "host": "nqn.2016-06.io.spdk:host1", 00:17:29.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.560 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 
{ 00:17:29.560 "method": "nvmf_subsystem_add_ns", 00:17:29.560 "params": { 00:17:29.560 "namespace": { 00:17:29.560 "bdev_name": "malloc0", 00:17:29.560 "nguid": "3024BDA1BDE441E8A0A5FA712037DFB3", 00:17:29.560 "nsid": 1, 00:17:29.560 "uuid": "3024bda1-bde4-41e8-a0a5-fa712037dfb3" 00:17:29.560 }, 00:17:29.560 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:17:29.560 } 00:17:29.560 }, 00:17:29.560 { 00:17:29.560 "method": "nvmf_subsystem_add_listener", 00:17:29.560 "params": { 00:17:29.560 "listen_address": { 00:17:29.560 "adrfam": "IPv4", 00:17:29.560 "traddr": "10.0.0.2", 00:17:29.560 "trsvcid": "4420", 00:17:29.560 "trtype": "TCP" 00:17:29.560 }, 00:17:29.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:29.560 "secure_channel": true 00:17:29.560 } 00:17:29.560 } 00:17:29.560 ] 00:17:29.560 } 00:17:29.560 ] 00:17:29.560 }' 00:17:29.560 10:10:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:29.560 10:10:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:29.560 10:10:00 -- common/autotest_common.sh@10 -- # set +x 00:17:29.820 10:10:00 -- nvmf/common.sh@469 -- # nvmfpid=89039 00:17:29.820 10:10:00 -- nvmf/common.sh@470 -- # waitforlisten 89039 00:17:29.820 10:10:00 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:29.820 10:10:00 -- common/autotest_common.sh@819 -- # '[' -z 89039 ']' 00:17:29.820 10:10:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.820 10:10:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:29.820 10:10:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.820 10:10:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:29.820 10:10:00 -- common/autotest_common.sh@10 -- # set +x 00:17:29.820 [2024-07-14 10:10:00.167319] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:29.820 [2024-07-14 10:10:00.167406] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.820 [2024-07-14 10:10:00.307620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.079 [2024-07-14 10:10:00.380087] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:30.079 [2024-07-14 10:10:00.380280] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.079 [2024-07-14 10:10:00.380294] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.079 [2024-07-14 10:10:00.380304] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:30.079 [2024-07-14 10:10:00.380329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.079 [2024-07-14 10:10:00.590878] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:30.079 [2024-07-14 10:10:00.622832] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:30.079 [2024-07-14 10:10:00.623059] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:30.646 10:10:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:30.646 10:10:01 -- common/autotest_common.sh@852 -- # return 0 00:17:30.646 10:10:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:30.646 10:10:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:30.646 10:10:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.646 10:10:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.669 10:10:01 -- target/tls.sh@216 -- # bdevperf_pid=89082 00:17:30.669 10:10:01 -- target/tls.sh@217 -- # waitforlisten 89082 /var/tmp/bdevperf.sock 00:17:30.669 10:10:01 -- common/autotest_common.sh@819 -- # '[' -z 89082 ']' 00:17:30.669 10:10:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:30.669 10:10:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:30.669 10:10:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:30.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:30.669 10:10:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:30.669 10:10:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.669 10:10:01 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:30.669 10:10:01 -- target/tls.sh@213 -- # echo '{ 00:17:30.669 "subsystems": [ 00:17:30.669 { 00:17:30.669 "subsystem": "iobuf", 00:17:30.669 "config": [ 00:17:30.669 { 00:17:30.669 "method": "iobuf_set_options", 00:17:30.669 "params": { 00:17:30.669 "large_bufsize": 135168, 00:17:30.669 "large_pool_count": 1024, 00:17:30.669 "small_bufsize": 8192, 00:17:30.669 "small_pool_count": 8192 00:17:30.669 } 00:17:30.669 } 00:17:30.669 ] 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "subsystem": "sock", 00:17:30.669 "config": [ 00:17:30.669 { 00:17:30.669 "method": "sock_impl_set_options", 00:17:30.669 "params": { 00:17:30.669 "enable_ktls": false, 00:17:30.669 "enable_placement_id": 0, 00:17:30.669 "enable_quickack": false, 00:17:30.669 "enable_recv_pipe": true, 00:17:30.669 "enable_zerocopy_send_client": false, 00:17:30.669 "enable_zerocopy_send_server": true, 00:17:30.669 "impl_name": "posix", 00:17:30.669 "recv_buf_size": 2097152, 00:17:30.669 "send_buf_size": 2097152, 00:17:30.669 "tls_version": 0, 00:17:30.669 "zerocopy_threshold": 0 00:17:30.669 } 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "method": "sock_impl_set_options", 00:17:30.669 "params": { 00:17:30.669 "enable_ktls": false, 00:17:30.669 "enable_placement_id": 0, 00:17:30.669 "enable_quickack": false, 00:17:30.669 "enable_recv_pipe": true, 00:17:30.669 "enable_zerocopy_send_client": false, 00:17:30.669 "enable_zerocopy_send_server": true, 00:17:30.669 "impl_name": "ssl", 00:17:30.669 "recv_buf_size": 4096, 00:17:30.669 "send_buf_size": 4096, 00:17:30.669 "tls_version": 0, 00:17:30.669 "zerocopy_threshold": 0 
00:17:30.669 } 00:17:30.669 } 00:17:30.669 ] 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "subsystem": "vmd", 00:17:30.669 "config": [] 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "subsystem": "accel", 00:17:30.669 "config": [ 00:17:30.669 { 00:17:30.669 "method": "accel_set_options", 00:17:30.669 "params": { 00:17:30.669 "buf_count": 2048, 00:17:30.669 "large_cache_size": 16, 00:17:30.669 "sequence_count": 2048, 00:17:30.669 "small_cache_size": 128, 00:17:30.669 "task_count": 2048 00:17:30.669 } 00:17:30.669 } 00:17:30.669 ] 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "subsystem": "bdev", 00:17:30.669 "config": [ 00:17:30.669 { 00:17:30.669 "method": "bdev_set_options", 00:17:30.669 "params": { 00:17:30.669 "bdev_auto_examine": true, 00:17:30.669 "bdev_io_cache_size": 256, 00:17:30.669 "bdev_io_pool_size": 65535, 00:17:30.669 "iobuf_large_cache_size": 16, 00:17:30.669 "iobuf_small_cache_size": 128 00:17:30.669 } 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "method": "bdev_raid_set_options", 00:17:30.669 "params": { 00:17:30.669 "process_window_size_kb": 1024 00:17:30.669 } 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "method": "bdev_iscsi_set_options", 00:17:30.669 "params": { 00:17:30.669 "timeout_sec": 30 00:17:30.669 } 00:17:30.669 }, 00:17:30.669 { 00:17:30.669 "method": "bdev_nvme_set_options", 00:17:30.669 "params": { 00:17:30.669 "action_on_timeout": "none", 00:17:30.669 "allow_accel_sequence": false, 00:17:30.669 "arbitration_burst": 0, 00:17:30.669 "bdev_retry_count": 3, 00:17:30.669 "ctrlr_loss_timeout_sec": 0, 00:17:30.669 "delay_cmd_submit": true, 00:17:30.669 "fast_io_fail_timeout_sec": 0, 00:17:30.669 "generate_uuids": false, 00:17:30.669 "high_priority_weight": 0, 00:17:30.669 "io_path_stat": false, 00:17:30.669 "io_queue_requests": 512, 00:17:30.669 "keep_alive_timeout_ms": 10000, 00:17:30.669 "low_priority_weight": 0, 00:17:30.669 "medium_priority_weight": 0, 00:17:30.669 "nvme_adminq_poll_period_us": 10000, 00:17:30.669 "nvme_ioq_poll_period_us": 0, 00:17:30.669 "reconnect_delay_sec": 0, 00:17:30.669 "timeout_admin_us": 0, 00:17:30.669 "timeout_us": 0, 00:17:30.669 "transport_ack_timeout": 0, 00:17:30.669 "transport_retry_count": 4, 00:17:30.669 "transport_tos": 0 00:17:30.669 } 00:17:30.670 }, 00:17:30.670 { 00:17:30.670 "method": "bdev_nvme_attach_controller", 00:17:30.670 "params": { 00:17:30.670 "adrfam": "IPv4", 00:17:30.670 "ctrlr_loss_timeout_sec": 0, 00:17:30.670 "ddgst": false, 00:17:30.670 "fast_io_fail_timeout_sec": 0, 00:17:30.670 "hdgst": false, 00:17:30.670 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:30.670 "name": "TLSTEST", 00:17:30.670 "prchk_guard": false, 00:17:30.670 "prchk_reftag": false, 00:17:30.670 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:17:30.670 "reconnect_delay_sec": 0, 00:17:30.670 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:30.670 "traddr": "10.0.0.2", 00:17:30.670 "trsvcid": "4420", 00:17:30.670 "trtype": "TCP" 00:17:30.670 } 00:17:30.670 }, 00:17:30.670 { 00:17:30.670 "method": "bdev_nvme_set_hotplug", 00:17:30.670 "params": { 00:17:30.670 "enable": false, 00:17:30.670 "period_us": 100000 00:17:30.670 } 00:17:30.670 }, 00:17:30.670 { 00:17:30.670 "method": "bdev_wait_for_examine" 00:17:30.670 } 00:17:30.670 ] 00:17:30.670 }, 00:17:30.670 { 00:17:30.670 "subsystem": "nbd", 00:17:30.670 "config": [] 00:17:30.670 } 00:17:30.670 ] 00:17:30.670 }' 00:17:30.670 [2024-07-14 10:10:01.179044] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
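The bdevperf half of the test uses the same trick: target/tls.sh@213 launches the app with -z (start up, attach nothing, and wait on the RPC socket) and a config piped in on /dev/fd/63, then target/tls.sh@220 fires the workload through that socket. Condensed into the two commands involved, a sketch with flags copied from the trace (the backgrounding is implicit in the harness):

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 -c <(echo "$bdevperfconf") &
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 \
    -s /var/tmp/bdevperf.sock perform_tests    # triggers the 10 s verify run below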
00:17:30.670 [2024-07-14 10:10:01.179754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89082 ] 00:17:30.928 [2024-07-14 10:10:01.313320] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.928 [2024-07-14 10:10:01.393664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:31.185 [2024-07-14 10:10:01.544295] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:31.750 10:10:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:31.750 10:10:02 -- common/autotest_common.sh@852 -- # return 0 00:17:31.750 10:10:02 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:31.750 Running I/O for 10 seconds... 00:17:41.729 00:17:41.729 Latency(us) 00:17:41.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.729 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.729 Verification LBA range: start 0x0 length 0x2000 00:17:41.729 TLSTESTn1 : 10.01 6709.80 26.21 0.00 0.00 19048.62 4230.05 19779.96 00:17:41.729 =================================================================================================================== 00:17:41.729 Total : 6709.80 26.21 0.00 0.00 19048.62 4230.05 19779.96 00:17:41.729 0 00:17:41.729 10:10:12 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.729 10:10:12 -- target/tls.sh@223 -- # killprocess 89082 00:17:41.729 10:10:12 -- common/autotest_common.sh@926 -- # '[' -z 89082 ']' 00:17:41.729 10:10:12 -- common/autotest_common.sh@930 -- # kill -0 89082 00:17:41.729 10:10:12 -- common/autotest_common.sh@931 -- # uname 00:17:41.729 10:10:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.729 10:10:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89082 00:17:41.988 10:10:12 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:41.988 10:10:12 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:41.988 killing process with pid 89082 00:17:41.988 10:10:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89082' 00:17:41.988 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.988 00:17:41.988 Latency(us) 00:17:41.988 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.988 =================================================================================================================== 00:17:41.988 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.988 10:10:12 -- common/autotest_common.sh@945 -- # kill 89082 00:17:41.988 10:10:12 -- common/autotest_common.sh@950 -- # wait 89082 00:17:41.988 10:10:12 -- target/tls.sh@224 -- # killprocess 89039 00:17:41.988 10:10:12 -- common/autotest_common.sh@926 -- # '[' -z 89039 ']' 00:17:41.988 10:10:12 -- common/autotest_common.sh@930 -- # kill -0 89039 00:17:41.988 10:10:12 -- common/autotest_common.sh@931 -- # uname 00:17:41.988 10:10:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:41.988 10:10:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89039 00:17:41.988 10:10:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:41.988 10:10:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 
= sudo ']' 00:17:41.988 10:10:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89039' 00:17:41.988 killing process with pid 89039 00:17:41.988 10:10:12 -- common/autotest_common.sh@945 -- # kill 89039 00:17:41.988 10:10:12 -- common/autotest_common.sh@950 -- # wait 89039 00:17:42.246 10:10:12 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:17:42.246 10:10:12 -- target/tls.sh@227 -- # cleanup 00:17:42.246 10:10:12 -- target/tls.sh@15 -- # process_shm --id 0 00:17:42.246 10:10:12 -- common/autotest_common.sh@796 -- # type=--id 00:17:42.246 10:10:12 -- common/autotest_common.sh@797 -- # id=0 00:17:42.246 10:10:12 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:42.246 10:10:12 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:42.246 10:10:12 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:42.246 10:10:12 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:42.246 10:10:12 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:42.246 10:10:12 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:42.246 nvmf_trace.0 00:17:42.246 10:10:12 -- common/autotest_common.sh@811 -- # return 0 00:17:42.246 10:10:12 -- target/tls.sh@16 -- # killprocess 89082 00:17:42.246 10:10:12 -- common/autotest_common.sh@926 -- # '[' -z 89082 ']' 00:17:42.246 10:10:12 -- common/autotest_common.sh@930 -- # kill -0 89082 00:17:42.246 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89082) - No such process 00:17:42.246 Process with pid 89082 is not found 00:17:42.246 10:10:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89082 is not found' 00:17:42.246 10:10:12 -- target/tls.sh@17 -- # nvmftestfini 00:17:42.246 10:10:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:42.246 10:10:12 -- nvmf/common.sh@116 -- # sync 00:17:42.505 10:10:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:42.505 10:10:12 -- nvmf/common.sh@119 -- # set +e 00:17:42.505 10:10:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:42.505 10:10:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:42.505 rmmod nvme_tcp 00:17:42.505 rmmod nvme_fabrics 00:17:42.505 rmmod nvme_keyring 00:17:42.505 10:10:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:42.505 10:10:12 -- nvmf/common.sh@123 -- # set -e 00:17:42.505 10:10:12 -- nvmf/common.sh@124 -- # return 0 00:17:42.505 10:10:12 -- nvmf/common.sh@477 -- # '[' -n 89039 ']' 00:17:42.505 10:10:12 -- nvmf/common.sh@478 -- # killprocess 89039 00:17:42.505 10:10:12 -- common/autotest_common.sh@926 -- # '[' -z 89039 ']' 00:17:42.505 10:10:12 -- common/autotest_common.sh@930 -- # kill -0 89039 00:17:42.505 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (89039) - No such process 00:17:42.505 10:10:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 89039 is not found' 00:17:42.505 Process with pid 89039 is not found 00:17:42.505 10:10:12 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:42.505 10:10:12 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:42.505 10:10:12 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:42.505 10:10:12 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:42.505 10:10:12 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:42.505 10:10:12 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.505 10:10:12 -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.505 10:10:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.505 10:10:12 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:42.505 10:10:12 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:17:42.505 00:17:42.505 real 1m9.239s 00:17:42.505 user 1m45.401s 00:17:42.505 sys 0m24.640s 00:17:42.505 10:10:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:42.505 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.505 ************************************ 00:17:42.505 END TEST nvmf_tls 00:17:42.505 ************************************ 00:17:42.505 10:10:12 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:42.505 10:10:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:42.505 10:10:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:42.505 10:10:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.505 ************************************ 00:17:42.505 START TEST nvmf_fips 00:17:42.505 ************************************ 00:17:42.505 10:10:12 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:17:42.505 * Looking for test storage... 00:17:42.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:17:42.505 10:10:13 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:42.505 10:10:13 -- nvmf/common.sh@7 -- # uname -s 00:17:42.505 10:10:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.505 10:10:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.505 10:10:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.505 10:10:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.505 10:10:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.505 10:10:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.505 10:10:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.505 10:10:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.505 10:10:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.505 10:10:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.505 10:10:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:42.505 10:10:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:42.505 10:10:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.505 10:10:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.505 10:10:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:42.505 10:10:13 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:42.505 10:10:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.505 10:10:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.505 10:10:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.505 10:10:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.505 10:10:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.505 10:10:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.505 10:10:13 -- paths/export.sh@5 -- # export PATH 00:17:42.505 10:10:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.505 10:10:13 -- nvmf/common.sh@46 -- # : 0 00:17:42.505 10:10:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:42.505 10:10:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:42.505 10:10:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:42.505 10:10:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.505 10:10:13 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.506 10:10:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:42.506 10:10:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:42.506 10:10:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:42.765 10:10:13 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:42.765 10:10:13 -- fips/fips.sh@89 -- # check_openssl_version 00:17:42.765 10:10:13 -- fips/fips.sh@83 -- # local target=3.0.0 00:17:42.765 10:10:13 -- fips/fips.sh@85 -- # openssl version 00:17:42.765 10:10:13 -- fips/fips.sh@85 -- # awk '{print $2}' 00:17:42.765 10:10:13 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:17:42.765 10:10:13 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:17:42.765 10:10:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:42.765 10:10:13 -- scripts/common.sh@333 -- # local ver2 
ver2_l 00:17:42.765 10:10:13 -- scripts/common.sh@335 -- # IFS=.-: 00:17:42.765 10:10:13 -- scripts/common.sh@335 -- # read -ra ver1 00:17:42.765 10:10:13 -- scripts/common.sh@336 -- # IFS=.-: 00:17:42.765 10:10:13 -- scripts/common.sh@336 -- # read -ra ver2 00:17:42.765 10:10:13 -- scripts/common.sh@337 -- # local 'op=>=' 00:17:42.765 10:10:13 -- scripts/common.sh@339 -- # ver1_l=3 00:17:42.765 10:10:13 -- scripts/common.sh@340 -- # ver2_l=3 00:17:42.765 10:10:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:42.765 10:10:13 -- scripts/common.sh@343 -- # case "$op" in 00:17:42.765 10:10:13 -- scripts/common.sh@347 -- # : 1 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # decimal 3 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=3 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 3 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # ver1[v]=3 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # decimal 3 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=3 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 3 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # ver2[v]=3 00:17:42.765 10:10:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:42.765 10:10:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v++ )) 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # decimal 0 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=0 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 0 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # ver1[v]=0 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # decimal 0 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=0 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 0 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:42.765 10:10:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:42.765 10:10:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v++ )) 00:17:42.765 10:10:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # decimal 9 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=9 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 9 00:17:42.765 10:10:13 -- scripts/common.sh@364 -- # ver1[v]=9 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # decimal 0 00:17:42.765 10:10:13 -- scripts/common.sh@352 -- # local d=0 00:17:42.765 10:10:13 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:17:42.765 10:10:13 -- scripts/common.sh@354 -- # echo 0 00:17:42.765 10:10:13 -- scripts/common.sh@365 -- # ver2[v]=0 00:17:42.765 10:10:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:42.765 10:10:13 -- scripts/common.sh@366 -- # return 0 00:17:42.765 10:10:13 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:17:42.765 10:10:13 -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:17:42.765 10:10:13 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:17:42.765 10:10:13 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:17:42.765 10:10:13 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:17:42.765 10:10:13 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:17:42.765 10:10:13 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:17:42.765 10:10:13 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:42.765 10:10:13 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:17:42.765 10:10:13 -- fips/fips.sh@114 -- # build_openssl_config 00:17:42.765 10:10:13 -- fips/fips.sh@37 -- # cat 00:17:42.765 10:10:13 -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:17:42.765 10:10:13 -- fips/fips.sh@58 -- # cat - 00:17:42.765 10:10:13 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:17:42.765 10:10:13 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:17:42.765 10:10:13 -- fips/fips.sh@117 -- # mapfile -t providers 00:17:42.765 10:10:13 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:17:42.765 10:10:13 -- fips/fips.sh@117 -- # grep name 00:17:42.765 10:10:13 -- fips/fips.sh@117 -- # openssl list -providers 00:17:42.765 10:10:13 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:17:42.765 10:10:13 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:17:42.765 10:10:13 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:17:42.765 10:10:13 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:17:42.765 10:10:13 -- fips/fips.sh@128 -- # : 00:17:42.765 10:10:13 -- common/autotest_common.sh@640 -- # local es=0 00:17:42.765 10:10:13 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:17:42.765 10:10:13 -- common/autotest_common.sh@628 -- # local arg=openssl 00:17:42.765 10:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.765 10:10:13 -- common/autotest_common.sh@632 -- # type -t openssl 00:17:42.765 10:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.765 10:10:13 -- common/autotest_common.sh@634 -- # type -P openssl 00:17:42.765 10:10:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:17:42.765 10:10:13 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:17:42.765 10:10:13 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:17:42.765 10:10:13 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:17:42.765 Error setting digest 00:17:42.765 00B22E71A87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:17:42.765 00B22E71A87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:17:42.765 10:10:13 -- common/autotest_common.sh@643 -- # es=1 00:17:42.765 10:10:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:17:42.765 10:10:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:17:42.765 10:10:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:17:42.765 10:10:13 -- fips/fips.sh@131 -- # nvmftestinit 00:17:42.765 10:10:13 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:42.765 10:10:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.765 10:10:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:42.765 10:10:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:42.765 10:10:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:42.765 10:10:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.765 10:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.765 10:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.765 10:10:13 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:42.765 10:10:13 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:42.765 10:10:13 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:42.765 10:10:13 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:42.765 10:10:13 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:42.765 10:10:13 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:42.765 10:10:13 -- 
nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:42.765 10:10:13 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:42.765 10:10:13 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:42.765 10:10:13 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:42.765 10:10:13 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:42.765 10:10:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:42.765 10:10:13 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:42.765 10:10:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:42.765 10:10:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:42.765 10:10:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:42.765 10:10:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:42.765 10:10:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:42.765 10:10:13 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:42.765 10:10:13 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:42.765 Cannot find device "nvmf_tgt_br" 00:17:42.765 10:10:13 -- nvmf/common.sh@154 -- # true 00:17:42.765 10:10:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:42.765 Cannot find device "nvmf_tgt_br2" 00:17:42.765 10:10:13 -- nvmf/common.sh@155 -- # true 00:17:42.765 10:10:13 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:42.765 10:10:13 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:42.765 Cannot find device "nvmf_tgt_br" 00:17:42.765 10:10:13 -- nvmf/common.sh@157 -- # true 00:17:42.765 10:10:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:42.765 Cannot find device "nvmf_tgt_br2" 00:17:42.765 10:10:13 -- nvmf/common.sh@158 -- # true 00:17:42.765 10:10:13 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:43.023 10:10:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:43.023 10:10:13 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:43.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.023 10:10:13 -- nvmf/common.sh@161 -- # true 00:17:43.023 10:10:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:43.023 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:43.023 10:10:13 -- nvmf/common.sh@162 -- # true 00:17:43.023 10:10:13 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:43.023 10:10:13 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:43.023 10:10:13 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:43.023 10:10:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:43.023 10:10:13 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:43.023 10:10:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:43.023 10:10:13 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:43.023 10:10:13 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:43.023 10:10:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:43.023 10:10:13 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:43.023 10:10:13 -- 
nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:43.023 10:10:13 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:43.023 10:10:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:43.023 10:10:13 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:43.023 10:10:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:43.023 10:10:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:43.023 10:10:13 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:43.023 10:10:13 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:43.023 10:10:13 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:43.023 10:10:13 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:43.023 10:10:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:43.023 10:10:13 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:43.023 10:10:13 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:43.023 10:10:13 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:43.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:43.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:17:43.023 00:17:43.023 --- 10.0.0.2 ping statistics --- 00:17:43.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.023 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:17:43.023 10:10:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:43.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:43.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:17:43.023 00:17:43.023 --- 10.0.0.3 ping statistics --- 00:17:43.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.023 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:17:43.023 10:10:13 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:43.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:43.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:17:43.023 00:17:43.023 --- 10.0.0.1 ping statistics --- 00:17:43.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:43.023 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:43.023 10:10:13 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:43.023 10:10:13 -- nvmf/common.sh@421 -- # return 0 00:17:43.023 10:10:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:43.023 10:10:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:43.023 10:10:13 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:43.023 10:10:13 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:43.023 10:10:13 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:43.023 10:10:13 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:43.023 10:10:13 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:43.280 10:10:13 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:17:43.280 10:10:13 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:43.280 10:10:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:43.280 10:10:13 -- common/autotest_common.sh@10 -- # set +x 00:17:43.280 10:10:13 -- nvmf/common.sh@469 -- # nvmfpid=89450 00:17:43.280 10:10:13 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:43.280 10:10:13 -- nvmf/common.sh@470 -- # waitforlisten 89450 00:17:43.280 10:10:13 -- common/autotest_common.sh@819 -- # '[' -z 89450 ']' 00:17:43.280 10:10:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.280 10:10:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:43.280 10:10:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:43.280 10:10:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:43.280 10:10:13 -- common/autotest_common.sh@10 -- # set +x 00:17:43.280 [2024-07-14 10:10:13.667740] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:17:43.280 [2024-07-14 10:10:13.667825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:43.280 [2024-07-14 10:10:13.806142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:43.538 [2024-07-14 10:10:13.868519] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:43.538 [2024-07-14 10:10:13.868661] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:43.538 [2024-07-14 10:10:13.868674] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:43.538 [2024-07-14 10:10:13.868681] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
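Stepping back from the network plumbing: the long scripts/common.sh walk earlier in this test (the @332-@367 trace lines) is ge 3.0.9 3.0.0, a field-by-field version compare proving the system OpenSSL is at least 3.0.0 before the FIPS checks proceed. The xtrace makes it look elaborate, but the logic reduces to a version sort; a compact equivalent, assuming GNU sort (this is not the script's own code):

ge() { [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$2" ]; }
ver=$(openssl version | awk '{print $2}')    # "3.0.9" in this run
ge "$ver" 3.0.0 && echo "OpenSSL >= 3.0.0, FIPS checks can proceed"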
00:17:43.538 [2024-07-14 10:10:13.868708] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.104 10:10:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:44.104 10:10:14 -- common/autotest_common.sh@852 -- # return 0 00:17:44.104 10:10:14 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:44.104 10:10:14 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:44.104 10:10:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.363 10:10:14 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.363 10:10:14 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:17:44.363 10:10:14 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:44.363 10:10:14 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:44.363 10:10:14 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:17:44.363 10:10:14 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:44.363 10:10:14 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:44.363 10:10:14 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:44.363 10:10:14 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.621 [2024-07-14 10:10:14.920667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.621 [2024-07-14 10:10:14.936585] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:44.621 [2024-07-14 10:10:14.936774] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.621 malloc0 00:17:44.621 10:10:14 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.621 10:10:14 -- fips/fips.sh@148 -- # bdevperf_pid=89503 00:17:44.621 10:10:14 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.621 10:10:14 -- fips/fips.sh@149 -- # waitforlisten 89503 /var/tmp/bdevperf.sock 00:17:44.621 10:10:14 -- common/autotest_common.sh@819 -- # '[' -z 89503 ']' 00:17:44.621 10:10:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:44.621 10:10:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:44.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.622 10:10:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.622 10:10:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:44.622 10:10:14 -- common/autotest_common.sh@10 -- # set +x 00:17:44.622 [2024-07-14 10:10:15.075546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
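The TLS key handling exercised here is symmetric: fips/fips.sh@137-@142 writes the interchange-format PSK to a mode-0600 file and registers it for the host on the target, and the bdev_nvme_attach_controller call below hands the same file to the initiator. Reduced to the essential commands as a sketch (key value, NQNs, addresses, and socket taken from this log; the --psk flag spelling on nvmf_subsystem_add_host is an assumption, since the traced rpc.py call at fips/fips.sh@24 does not show its arguments):

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
    nqn.2016-06.io.spdk:host1 --psk key.txt    # target side (flag assumed)
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk key.txt # initiator side, as traced below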
00:17:44.622 [2024-07-14 10:10:15.075647] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89503 ] 00:17:44.880 [2024-07-14 10:10:15.216098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.880 [2024-07-14 10:10:15.290877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.816 10:10:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:45.816 10:10:16 -- common/autotest_common.sh@852 -- # return 0 00:17:45.816 10:10:16 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:45.816 [2024-07-14 10:10:16.227408] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.816 TLSTESTn1 00:17:45.817 10:10:16 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:46.075 Running I/O for 10 seconds... 00:17:56.073 00:17:56.073 Latency(us) 00:17:56.073 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.074 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:56.074 Verification LBA range: start 0x0 length 0x2000 00:17:56.074 TLSTESTn1 : 10.02 6165.19 24.08 0.00 0.00 20726.23 4527.94 23950.43 00:17:56.074 =================================================================================================================== 00:17:56.074 Total : 6165.19 24.08 0.00 0.00 20726.23 4527.94 23950.43 00:17:56.074 0 00:17:56.074 10:10:26 -- fips/fips.sh@1 -- # cleanup 00:17:56.074 10:10:26 -- fips/fips.sh@15 -- # process_shm --id 0 00:17:56.074 10:10:26 -- common/autotest_common.sh@796 -- # type=--id 00:17:56.074 10:10:26 -- common/autotest_common.sh@797 -- # id=0 00:17:56.074 10:10:26 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:17:56.074 10:10:26 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:56.074 10:10:26 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:17:56.074 10:10:26 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:17:56.074 10:10:26 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:17:56.074 10:10:26 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:56.074 nvmf_trace.0 00:17:56.074 10:10:26 -- common/autotest_common.sh@811 -- # return 0 00:17:56.074 10:10:26 -- fips/fips.sh@16 -- # killprocess 89503 00:17:56.074 10:10:26 -- common/autotest_common.sh@926 -- # '[' -z 89503 ']' 00:17:56.074 10:10:26 -- common/autotest_common.sh@930 -- # kill -0 89503 00:17:56.074 10:10:26 -- common/autotest_common.sh@931 -- # uname 00:17:56.074 10:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:56.074 10:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89503 00:17:56.074 10:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:17:56.074 killing process with pid 89503 00:17:56.074 10:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:17:56.074 10:10:26 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 89503' 00:17:56.074 10:10:26 -- common/autotest_common.sh@945 -- # kill 89503 00:17:56.074 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.074 00:17:56.074 Latency(us) 00:17:56.074 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.074 =================================================================================================================== 00:17:56.074 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:56.074 10:10:26 -- common/autotest_common.sh@950 -- # wait 89503 00:17:56.333 10:10:26 -- fips/fips.sh@17 -- # nvmftestfini 00:17:56.333 10:10:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:56.333 10:10:26 -- nvmf/common.sh@116 -- # sync 00:17:56.333 10:10:26 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:56.333 10:10:26 -- nvmf/common.sh@119 -- # set +e 00:17:56.333 10:10:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:56.333 10:10:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:56.333 rmmod nvme_tcp 00:17:56.333 rmmod nvme_fabrics 00:17:56.333 rmmod nvme_keyring 00:17:56.333 10:10:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:56.333 10:10:26 -- nvmf/common.sh@123 -- # set -e 00:17:56.333 10:10:26 -- nvmf/common.sh@124 -- # return 0 00:17:56.333 10:10:26 -- nvmf/common.sh@477 -- # '[' -n 89450 ']' 00:17:56.333 10:10:26 -- nvmf/common.sh@478 -- # killprocess 89450 00:17:56.333 10:10:26 -- common/autotest_common.sh@926 -- # '[' -z 89450 ']' 00:17:56.333 10:10:26 -- common/autotest_common.sh@930 -- # kill -0 89450 00:17:56.592 10:10:26 -- common/autotest_common.sh@931 -- # uname 00:17:56.592 10:10:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:56.592 10:10:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89450 00:17:56.592 10:10:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:56.592 killing process with pid 89450 00:17:56.592 10:10:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:56.592 10:10:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89450' 00:17:56.592 10:10:26 -- common/autotest_common.sh@945 -- # kill 89450 00:17:56.592 10:10:26 -- common/autotest_common.sh@950 -- # wait 89450 00:17:56.592 10:10:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:56.592 10:10:27 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:56.592 10:10:27 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:56.592 10:10:27 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:56.592 10:10:27 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:56.592 10:10:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.592 10:10:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.592 10:10:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.592 10:10:27 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:56.592 10:10:27 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:17:56.592 00:17:56.592 real 0m14.175s 00:17:56.592 user 0m18.942s 00:17:56.592 sys 0m5.890s 00:17:56.592 ************************************ 00:17:56.592 END TEST nvmf_fips 00:17:56.592 ************************************ 00:17:56.592 10:10:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:56.592 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:56.852 10:10:27 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:17:56.852 10:10:27 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:56.852 10:10:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:56.852 10:10:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:56.852 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:56.852 ************************************ 00:17:56.852 START TEST nvmf_fuzz 00:17:56.852 ************************************ 00:17:56.852 10:10:27 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:17:56.852 * Looking for test storage... 00:17:56.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:56.852 10:10:27 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:56.852 10:10:27 -- nvmf/common.sh@7 -- # uname -s 00:17:56.852 10:10:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:56.852 10:10:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:56.852 10:10:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:56.852 10:10:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:56.852 10:10:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:56.852 10:10:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:56.852 10:10:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:56.852 10:10:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:56.852 10:10:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:56.852 10:10:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:56.852 10:10:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:56.852 10:10:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:56.852 10:10:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:56.852 10:10:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:56.852 10:10:27 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:56.852 10:10:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:56.852 10:10:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:56.852 10:10:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:56.852 10:10:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.852 10:10:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.852 
10:10:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.852 10:10:27 -- paths/export.sh@5 -- # export PATH 00:17:56.852 10:10:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:56.852 10:10:27 -- nvmf/common.sh@46 -- # : 0 00:17:56.852 10:10:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:56.852 10:10:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:56.852 10:10:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:56.852 10:10:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:56.852 10:10:27 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:56.852 10:10:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:56.852 10:10:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:56.852 10:10:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:56.852 10:10:27 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:17:56.852 10:10:27 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:56.852 10:10:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:56.852 10:10:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:56.852 10:10:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:56.852 10:10:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:56.852 10:10:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:56.852 10:10:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:56.852 10:10:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:56.852 10:10:27 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:56.852 10:10:27 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:56.852 10:10:27 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:56.852 10:10:27 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:56.852 10:10:27 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:56.852 10:10:27 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:56.852 10:10:27 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:56.852 10:10:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:56.852 10:10:27 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:56.852 10:10:27 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:56.852 10:10:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:56.852 10:10:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:56.852 10:10:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:56.852 10:10:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:56.852 10:10:27 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:56.852 10:10:27 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:56.852 Cannot find device "nvmf_tgt_br" 00:17:56.852 10:10:27 -- nvmf/common.sh@154 -- # true 00:17:56.852 10:10:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:56.852 Cannot find device "nvmf_tgt_br2" 00:17:56.852 10:10:27 -- nvmf/common.sh@155 -- # true 00:17:56.852 10:10:27 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:56.852 10:10:27 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:56.852 Cannot find device "nvmf_tgt_br" 00:17:56.852 10:10:27 -- nvmf/common.sh@157 -- # true 00:17:56.852 10:10:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:56.852 Cannot find device "nvmf_tgt_br2" 00:17:56.852 10:10:27 -- nvmf/common.sh@158 -- # true 00:17:56.852 10:10:27 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:56.852 10:10:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:57.112 10:10:27 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:57.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.112 10:10:27 -- nvmf/common.sh@161 -- # true 00:17:57.112 10:10:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:57.112 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:57.112 10:10:27 -- nvmf/common.sh@162 -- # true 00:17:57.112 10:10:27 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:57.112 10:10:27 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:57.112 10:10:27 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:57.112 10:10:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:57.112 10:10:27 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:57.112 10:10:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:57.112 10:10:27 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:57.112 10:10:27 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:57.112 10:10:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:57.112 10:10:27 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:57.112 10:10:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:57.112 10:10:27 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:57.112 10:10:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:57.112 10:10:27 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:57.112 10:10:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:57.112 10:10:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:57.112 10:10:27 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:17:57.112 10:10:27 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:57.112 10:10:27 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:57.112 10:10:27 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:57.112 10:10:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:57.112 10:10:27 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:57.112 10:10:27 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:57.112 10:10:27 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:57.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:57.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.096 ms 00:17:57.112 00:17:57.112 --- 10.0.0.2 ping statistics --- 00:17:57.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.112 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:17:57.112 10:10:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:57.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:57.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.052 ms 00:17:57.112 00:17:57.112 --- 10.0.0.3 ping statistics --- 00:17:57.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.112 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:17:57.112 10:10:27 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:57.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:57.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:17:57.112 00:17:57.112 --- 10.0.0.1 ping statistics --- 00:17:57.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:57.112 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:17:57.112 10:10:27 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:57.112 10:10:27 -- nvmf/common.sh@421 -- # return 0 00:17:57.112 10:10:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:57.112 10:10:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:57.112 10:10:27 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:57.112 10:10:27 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:57.112 10:10:27 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:57.112 10:10:27 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:57.112 10:10:27 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:57.112 10:10:27 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=89839 00:17:57.112 10:10:27 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:57.112 10:10:27 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:57.112 10:10:27 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 89839 00:17:57.112 10:10:27 -- common/autotest_common.sh@819 -- # '[' -z 89839 ']' 00:17:57.112 10:10:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:57.112 10:10:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:57.112 10:10:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
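The block of ip/iptables commands above builds a veth test network so the target can listen on 10.0.0.2 from inside its own namespace while the initiator stays in the root namespace on 10.0.0.1. A condensed sketch of the topology with the same interface names as the trace; the second target interface, the link-up steps, and error handling are omitted:

# Move the target end of a veth pair into nvmf_tgt_ns_spdk and join the
# peer ends with a bridge so initiator and target can reach each other.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Admit NVMe/TCP traffic on the default port and let the bridge forward:
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) confirm connectivity in both directions before any NVMe traffic is attempted.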
00:17:57.112 10:10:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:57.112 10:10:27 -- common/autotest_common.sh@10 -- # set +x 00:17:58.049 10:10:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:58.049 10:10:28 -- common/autotest_common.sh@852 -- # return 0 00:17:58.049 10:10:28 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:58.049 10:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.049 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 10:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:17:58.308 10:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.308 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 Malloc0 00:17:58.308 10:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:58.308 10:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.308 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 10:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:58.308 10:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.308 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 10:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:58.308 10:10:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.308 10:10:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.308 10:10:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:17:58.308 10:10:28 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:17:58.567 Shutting down the fuzz application 00:17:58.567 10:10:28 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:17:58.826 Shutting down the fuzz application 00:17:58.826 10:10:29 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:58.826 10:10:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:58.826 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.826 10:10:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:58.826 10:10:29 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:17:58.826 10:10:29 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:17:58.826 10:10:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:58.826 10:10:29 -- nvmf/common.sh@116 -- # sync 00:17:59.086 10:10:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:59.086 10:10:29 -- nvmf/common.sh@119 -- # set +e 00:17:59.086 10:10:29 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:17:59.086 10:10:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:59.086 rmmod nvme_tcp 00:17:59.086 rmmod nvme_fabrics 00:17:59.086 rmmod nvme_keyring 00:17:59.086 10:10:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:59.086 10:10:29 -- nvmf/common.sh@123 -- # set -e 00:17:59.086 10:10:29 -- nvmf/common.sh@124 -- # return 0 00:17:59.086 10:10:29 -- nvmf/common.sh@477 -- # '[' -n 89839 ']' 00:17:59.086 10:10:29 -- nvmf/common.sh@478 -- # killprocess 89839 00:17:59.086 10:10:29 -- common/autotest_common.sh@926 -- # '[' -z 89839 ']' 00:17:59.086 10:10:29 -- common/autotest_common.sh@930 -- # kill -0 89839 00:17:59.086 10:10:29 -- common/autotest_common.sh@931 -- # uname 00:17:59.086 10:10:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:59.086 10:10:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 89839 00:17:59.086 10:10:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:59.086 10:10:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:59.086 killing process with pid 89839 00:17:59.086 10:10:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 89839' 00:17:59.086 10:10:29 -- common/autotest_common.sh@945 -- # kill 89839 00:17:59.086 10:10:29 -- common/autotest_common.sh@950 -- # wait 89839 00:17:59.346 10:10:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:59.346 10:10:29 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:59.346 10:10:29 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:59.346 10:10:29 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:59.346 10:10:29 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:59.346 10:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.346 10:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.346 10:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.346 10:10:29 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:59.346 10:10:29 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:17:59.346 00:17:59.346 real 0m2.564s 00:17:59.346 user 0m2.648s 00:17:59.346 sys 0m0.616s 00:17:59.346 10:10:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:59.346 ************************************ 00:17:59.346 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:59.346 END TEST nvmf_fuzz 00:17:59.346 ************************************ 00:17:59.346 10:10:29 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:59.346 10:10:29 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:59.346 10:10:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:59.346 10:10:29 -- common/autotest_common.sh@10 -- # set +x 00:17:59.346 ************************************ 00:17:59.346 START TEST nvmf_multiconnection 00:17:59.346 ************************************ 00:17:59.346 10:10:29 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:17:59.346 * Looking for test storage... 
00:17:59.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:17:59.346 10:10:29 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:59.346 10:10:29 -- nvmf/common.sh@7 -- # uname -s 00:17:59.346 10:10:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:59.346 10:10:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:59.346 10:10:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:59.346 10:10:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:59.346 10:10:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:59.346 10:10:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:59.346 10:10:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:59.346 10:10:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:59.346 10:10:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:59.346 10:10:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:59.346 10:10:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:59.346 10:10:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:17:59.346 10:10:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:59.346 10:10:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:59.346 10:10:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:59.346 10:10:29 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:59.346 10:10:29 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:59.346 10:10:29 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:59.346 10:10:29 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:59.346 10:10:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.346 10:10:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.346 10:10:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.346 10:10:29 -- 
paths/export.sh@5 -- # export PATH 00:17:59.346 10:10:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:59.346 10:10:29 -- nvmf/common.sh@46 -- # : 0 00:17:59.346 10:10:29 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:59.346 10:10:29 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:59.346 10:10:29 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:59.346 10:10:29 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:59.346 10:10:29 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:59.346 10:10:29 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:59.346 10:10:29 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:59.346 10:10:29 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:59.346 10:10:29 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:59.346 10:10:29 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:59.346 10:10:29 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:17:59.346 10:10:29 -- target/multiconnection.sh@16 -- # nvmftestinit 00:17:59.346 10:10:29 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:59.346 10:10:29 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:59.346 10:10:29 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:59.346 10:10:29 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:59.346 10:10:29 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:59.346 10:10:29 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:59.346 10:10:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:59.346 10:10:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:59.606 10:10:29 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:59.606 10:10:29 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:59.606 10:10:29 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:59.606 10:10:29 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:59.606 10:10:29 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:59.606 10:10:29 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:59.606 10:10:29 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.606 10:10:29 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.606 10:10:29 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:59.606 10:10:29 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:59.606 10:10:29 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:59.606 10:10:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:59.606 10:10:29 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:59.606 10:10:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.606 10:10:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:59.606 10:10:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:59.606 10:10:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:59.606 10:10:29 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:59.606 10:10:29 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:59.606 10:10:29 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:59.606 Cannot find device "nvmf_tgt_br" 00:17:59.606 10:10:29 -- nvmf/common.sh@154 -- # true 00:17:59.606 10:10:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:59.606 Cannot find device "nvmf_tgt_br2" 00:17:59.606 10:10:29 -- nvmf/common.sh@155 -- # true 00:17:59.606 10:10:29 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:59.606 10:10:29 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:59.606 Cannot find device "nvmf_tgt_br" 00:17:59.606 10:10:29 -- nvmf/common.sh@157 -- # true 00:17:59.606 10:10:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:59.606 Cannot find device "nvmf_tgt_br2" 00:17:59.606 10:10:29 -- nvmf/common.sh@158 -- # true 00:17:59.606 10:10:29 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:59.606 10:10:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:59.606 10:10:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:59.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.606 10:10:30 -- nvmf/common.sh@161 -- # true 00:17:59.606 10:10:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:59.606 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:59.606 10:10:30 -- nvmf/common.sh@162 -- # true 00:17:59.606 10:10:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:59.606 10:10:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:59.606 10:10:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:59.606 10:10:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:59.606 10:10:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:59.606 10:10:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:59.606 10:10:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:59.606 10:10:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:59.606 10:10:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:59.606 10:10:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:59.606 10:10:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:59.606 10:10:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:59.606 10:10:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:59.606 10:10:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:59.606 10:10:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:59.606 10:10:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:59.606 10:10:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:59.606 10:10:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:59.606 10:10:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:59.865 10:10:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:59.865 10:10:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:59.865 
10:10:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:59.865 10:10:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:59.865 10:10:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:59.865 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.865 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.082 ms 00:17:59.865 00:17:59.865 --- 10.0.0.2 ping statistics --- 00:17:59.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.865 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:17:59.865 10:10:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:59.865 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:59.865 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:17:59.865 00:17:59.865 --- 10.0.0.3 ping statistics --- 00:17:59.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.865 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:17:59.865 10:10:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:59.865 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:59.865 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.055 ms 00:17:59.865 00:17:59.865 --- 10.0.0.1 ping statistics --- 00:17:59.865 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.865 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:17:59.865 10:10:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.865 10:10:30 -- nvmf/common.sh@421 -- # return 0 00:17:59.865 10:10:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:59.865 10:10:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.865 10:10:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:59.865 10:10:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:59.865 10:10:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.865 10:10:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:59.865 10:10:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:59.865 10:10:30 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:17:59.865 10:10:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:59.865 10:10:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:59.865 10:10:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.865 10:10:30 -- nvmf/common.sh@469 -- # nvmfpid=90050 00:17:59.865 10:10:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:59.865 10:10:30 -- nvmf/common.sh@470 -- # waitforlisten 90050 00:17:59.865 10:10:30 -- common/autotest_common.sh@819 -- # '[' -z 90050 ']' 00:17:59.865 10:10:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.865 10:10:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:59.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.865 10:10:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.865 10:10:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:59.865 10:10:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.865 [2024-07-14 10:10:30.297969] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
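nvmfappstart above runs the target inside the namespace so that it owns 10.0.0.2, then waits for its RPC socket. A sketch of what the traced commands amount to; the backgrounding is implied by the pid being captured, and waitforlisten is the autotest helper that polls /var/tmp/spdk.sock:

# Launch the target on all four cores (-m 0xF) inside the namespace:
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once the RPC server answers on spdk.sock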
00:17:59.865 [2024-07-14 10:10:30.298081] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:00.123 [2024-07-14 10:10:30.440956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:00.123 [2024-07-14 10:10:30.534065] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:00.123 [2024-07-14 10:10:30.534248] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:00.123 [2024-07-14 10:10:30.534262] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:00.123 [2024-07-14 10:10:30.534270] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:00.123 [2024-07-14 10:10:30.534357] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:00.123 [2024-07-14 10:10:30.534838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:00.123 [2024-07-14 10:10:30.535288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:00.123 [2024-07-14 10:10:30.535295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:01.061 10:10:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:01.061 10:10:31 -- common/autotest_common.sh@852 -- # return 0 00:18:01.061 10:10:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:01.061 10:10:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.061 10:10:31 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 [2024-07-14 10:10:31.338902] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # seq 1 11 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.061 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 Malloc1 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 [2024-07-14 10:10:31.409203] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.061 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 Malloc2 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.061 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 Malloc3 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.061 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:01.061 
10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 Malloc4 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.061 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 Malloc5 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.061 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.061 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:18:01.061 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.061 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.321 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 Malloc6 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.321 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 Malloc7 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.321 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 Malloc8 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 
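The long run of rpc_cmd calls above (and continuing below) is the multiconnection setup loop: one malloc bdev, subsystem, namespace, and TCP listener per connection, eleven in total (NVMF_SUBSYS=11, MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512 per the sourced variables earlier in this log). A sketch of the loop behind the trace, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py:

for i in $(seq 1 11); do
    rpc_cmd bdev_malloc_create 64 512 -b "Malloc$i"    # 64 MB bdev, 512 B blocks
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done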
00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.321 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 Malloc9 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.321 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 Malloc10 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.321 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.321 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:18:01.321 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.321 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.580 10:10:31 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.580 10:10:31 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:01.580 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.580 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 Malloc11 00:18:01.580 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.580 10:10:31 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:01.580 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.580 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.580 10:10:31 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:01.580 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.580 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.580 10:10:31 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:18:01.580 10:10:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:01.580 10:10:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.580 10:10:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:01.580 10:10:31 -- target/multiconnection.sh@28 -- # seq 1 11 00:18:01.580 10:10:31 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:01.580 10:10:31 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:01.580 10:10:32 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:01.580 10:10:32 -- common/autotest_common.sh@1177 -- # local i=0 00:18:01.580 10:10:32 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:01.580 10:10:32 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:01.580 10:10:32 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:04.108 10:10:34 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:04.108 10:10:34 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:18:04.108 10:10:34 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:04.108 10:10:34 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:04.108 10:10:34 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:04.108 10:10:34 -- common/autotest_common.sh@1187 -- # return 0 00:18:04.108 10:10:34 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:04.108 10:10:34 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:18:04.108 10:10:34 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:04.108 10:10:34 -- common/autotest_common.sh@1177 -- # local i=0 00:18:04.108 10:10:34 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:04.108 10:10:34 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:04.108 10:10:34 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:06.009 10:10:36 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:06.009 10:10:36 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:06.009 10:10:36 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:18:06.009 10:10:36 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:06.009 10:10:36 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:06.009 10:10:36 -- common/autotest_common.sh@1187 -- # return 0 00:18:06.009 10:10:36 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:06.009 10:10:36 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:18:06.009 10:10:36 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:06.009 10:10:36 -- common/autotest_common.sh@1177 -- # local i=0 00:18:06.009 10:10:36 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.009 10:10:36 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:06.009 10:10:36 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:07.950 10:10:38 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:07.950 10:10:38 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:07.950 10:10:38 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:18:08.208 10:10:38 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:08.208 10:10:38 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.208 10:10:38 -- common/autotest_common.sh@1187 -- # return 0 00:18:08.208 10:10:38 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:08.208 10:10:38 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:18:08.208 10:10:38 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:08.208 10:10:38 -- common/autotest_common.sh@1177 -- # local i=0 00:18:08.208 10:10:38 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:08.208 10:10:38 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:08.208 10:10:38 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:10.737 10:10:40 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:10.737 10:10:40 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:10.737 10:10:40 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:18:10.737 10:10:40 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:10.737 10:10:40 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:10.737 10:10:40 -- common/autotest_common.sh@1187 -- # return 0 00:18:10.737 10:10:40 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:10.737 10:10:40 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:18:10.737 10:10:40 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:10.737 10:10:40 -- common/autotest_common.sh@1177 -- # local i=0 00:18:10.737 10:10:40 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:10.737 10:10:40 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:10.737 10:10:40 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:12.640 10:10:42 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:12.640 10:10:42 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:12.640 10:10:42 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:18:12.640 10:10:42 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:12.640 10:10:42 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:12.640 10:10:42 
-- common/autotest_common.sh@1187 -- # return 0 00:18:12.640 10:10:42 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:12.640 10:10:42 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:18:12.640 10:10:43 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:12.640 10:10:43 -- common/autotest_common.sh@1177 -- # local i=0 00:18:12.640 10:10:43 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:12.640 10:10:43 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:12.640 10:10:43 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:14.542 10:10:45 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:14.542 10:10:45 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:14.542 10:10:45 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:18:14.801 10:10:45 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:14.801 10:10:45 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:14.801 10:10:45 -- common/autotest_common.sh@1187 -- # return 0 00:18:14.801 10:10:45 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:14.801 10:10:45 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:18:14.801 10:10:45 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:18:14.801 10:10:45 -- common/autotest_common.sh@1177 -- # local i=0 00:18:14.801 10:10:45 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:14.801 10:10:45 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:14.801 10:10:45 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:17.335 10:10:47 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:17.335 10:10:47 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:17.335 10:10:47 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:18:17.335 10:10:47 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:17.335 10:10:47 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:17.335 10:10:47 -- common/autotest_common.sh@1187 -- # return 0 00:18:17.335 10:10:47 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:17.335 10:10:47 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:18:17.335 10:10:47 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:18:17.335 10:10:47 -- common/autotest_common.sh@1177 -- # local i=0 00:18:17.335 10:10:47 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:17.335 10:10:47 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:17.335 10:10:47 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:19.237 10:10:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:19.237 10:10:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:19.237 10:10:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:18:19.237 10:10:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
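[editor's note] The trace above repeats one connect-and-wait pattern per subsystem: nvme connect against cnode$i, then a poll until a block device whose serial matches SPDK$i shows up in lsblk. A minimal sketch of that waitforserial pattern, reconstructed from the @1177-@1187 trace lines (variable names and the 15-retry limit are taken from the trace; the exact autotest_common.sh body, including the sleep/check ordering, may differ):

    waitforserial() {
        local serial=$1 i=0
        local nvme_device_counter=1 nvme_devices=0
        while (( i++ <= 15 )); do
            sleep 2
            # count block devices whose serial column matches, e.g. SPDK1
            nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
            (( nvme_devices == nvme_device_counter )) && return 0
        done
        return 1
    }
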
00:18:19.237 10:10:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:19.237 10:10:49 -- common/autotest_common.sh@1187 -- # return 0 00:18:19.237 10:10:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:19.238 10:10:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:18:19.238 10:10:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:18:19.238 10:10:49 -- common/autotest_common.sh@1177 -- # local i=0 00:18:19.238 10:10:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:19.238 10:10:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:19.238 10:10:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:21.140 10:10:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:21.398 10:10:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:21.398 10:10:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:18:21.398 10:10:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:21.398 10:10:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:21.398 10:10:51 -- common/autotest_common.sh@1187 -- # return 0 00:18:21.398 10:10:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:21.399 10:10:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:18:21.399 10:10:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:18:21.399 10:10:51 -- common/autotest_common.sh@1177 -- # local i=0 00:18:21.399 10:10:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:21.399 10:10:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:21.399 10:10:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:23.939 10:10:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:23.939 10:10:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:23.939 10:10:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:18:23.939 10:10:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:23.939 10:10:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:23.939 10:10:53 -- common/autotest_common.sh@1187 -- # return 0 00:18:23.939 10:10:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:23.939 10:10:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:18:23.939 10:10:54 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:18:23.939 10:10:54 -- common/autotest_common.sh@1177 -- # local i=0 00:18:23.939 10:10:54 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:23.939 10:10:54 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:23.939 10:10:54 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:25.839 10:10:56 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:25.839 10:10:56 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:25.839 10:10:56 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:18:25.839 10:10:56 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:25.839 10:10:56 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:25.839 10:10:56 -- common/autotest_common.sh@1187 -- # return 0 00:18:25.839 10:10:56 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:18:25.839 [global] 00:18:25.839 thread=1 00:18:25.839 invalidate=1 00:18:25.839 rw=read 00:18:25.839 time_based=1 00:18:25.839 runtime=10 00:18:25.839 ioengine=libaio 00:18:25.839 direct=1 00:18:25.839 bs=262144 00:18:25.839 iodepth=64 00:18:25.839 norandommap=1 00:18:25.839 numjobs=1 00:18:25.839 00:18:25.839 [job0] 00:18:25.839 filename=/dev/nvme0n1 00:18:25.839 [job1] 00:18:25.839 filename=/dev/nvme10n1 00:18:25.839 [job2] 00:18:25.839 filename=/dev/nvme1n1 00:18:25.839 [job3] 00:18:25.839 filename=/dev/nvme2n1 00:18:25.839 [job4] 00:18:25.839 filename=/dev/nvme3n1 00:18:25.839 [job5] 00:18:25.839 filename=/dev/nvme4n1 00:18:25.839 [job6] 00:18:25.839 filename=/dev/nvme5n1 00:18:25.839 [job7] 00:18:25.839 filename=/dev/nvme6n1 00:18:25.839 [job8] 00:18:25.839 filename=/dev/nvme7n1 00:18:25.839 [job9] 00:18:25.839 filename=/dev/nvme8n1 00:18:25.839 [job10] 00:18:25.839 filename=/dev/nvme9n1 00:18:25.839 Could not set queue depth (nvme0n1) 00:18:25.839 Could not set queue depth (nvme10n1) 00:18:25.839 Could not set queue depth (nvme1n1) 00:18:25.839 Could not set queue depth (nvme2n1) 00:18:25.839 Could not set queue depth (nvme3n1) 00:18:25.839 Could not set queue depth (nvme4n1) 00:18:25.839 Could not set queue depth (nvme5n1) 00:18:25.839 Could not set queue depth (nvme6n1) 00:18:25.840 Could not set queue depth (nvme7n1) 00:18:25.840 Could not set queue depth (nvme8n1) 00:18:25.840 Could not set queue depth (nvme9n1) 00:18:26.098 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:26.098 fio-3.35 00:18:26.098 Starting 11 threads 00:18:38.338 00:18:38.338 job0: (groupid=0, jobs=1): err= 0: pid=90528: Sun Jul 14 10:11:06 2024 00:18:38.338 read: IOPS=981, BW=245MiB/s (257MB/s)(2462MiB/10030msec) 00:18:38.338 slat (usec): min=15, max=69774, avg=970.04, stdev=3838.33 
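[editor's note] The [global] section dumped above maps directly onto the fio-wrapper flags shown in the invocation: -i 262144 evidently becomes bs=262144 (256 KiB), -d 64 becomes iodepth=64, -t read becomes rw=read, and -r 10 becomes runtime=10 with time_based=1, with one [jobN] stanza emitted per connected namespace. Laid out as a conventional fio job file (first two of the eleven stanzas shown; job1..job10 differ only in filename, in the device-enumeration order visible above):

    [global]
    thread=1
    invalidate=1
    rw=read
    time_based=1
    runtime=10
    ioengine=libaio
    direct=1
    bs=262144
    iodepth=64
    norandommap=1
    numjobs=1

    [job0]
    filename=/dev/nvme0n1

    [job1]
    filename=/dev/nvme10n1
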
00:18:38.338 clat (msec): min=20, max=225, avg=64.12, stdev=28.72 00:18:38.338 lat (msec): min=20, max=238, avg=65.09, stdev=29.23 00:18:38.338 clat percentiles (msec): 00:18:38.338 | 1.00th=[ 32], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 47], 00:18:38.338 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 56], 60.00th=[ 59], 00:18:38.338 | 70.00th=[ 64], 80.00th=[ 79], 90.00th=[ 92], 95.00th=[ 144], 00:18:38.338 | 99.00th=[ 178], 99.50th=[ 182], 99.90th=[ 188], 99.95th=[ 194], 00:18:38.338 | 99.99th=[ 226] 00:18:38.338 bw ( KiB/s): min=94720, max=324982, per=12.59%, avg=250365.00, stdev=76258.58, samples=20 00:18:38.338 iops : min= 370, max= 1269, avg=977.90, stdev=297.83, samples=20 00:18:38.338 lat (msec) : 50=31.31%, 100=62.55%, 250=6.13% 00:18:38.339 cpu : usr=0.32%, sys=3.03%, ctx=1794, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=9846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job1: (groupid=0, jobs=1): err= 0: pid=90529: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=361, BW=90.3MiB/s (94.7MB/s)(914MiB/10121msec) 00:18:38.339 slat (usec): min=16, max=99368, avg=2673.66, stdev=8861.02 00:18:38.339 clat (usec): min=1226, max=332072, avg=174222.17, stdev=57736.61 00:18:38.339 lat (usec): min=1255, max=356796, avg=176895.83, stdev=59221.74 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 3], 5.00th=[ 34], 10.00th=[ 93], 20.00th=[ 144], 00:18:38.339 | 30.00th=[ 155], 40.00th=[ 171], 50.00th=[ 180], 60.00th=[ 205], 00:18:38.339 | 70.00th=[ 213], 80.00th=[ 220], 90.00th=[ 228], 95.00th=[ 234], 00:18:38.339 | 99.00th=[ 279], 99.50th=[ 313], 99.90th=[ 334], 99.95th=[ 334], 00:18:38.339 | 99.99th=[ 334] 00:18:38.339 bw ( KiB/s): min=66048, max=153088, per=4.63%, avg=92009.30, stdev=24977.55, samples=20 00:18:38.339 iops : min= 258, max= 598, avg=359.30, stdev=97.61, samples=20 00:18:38.339 lat (msec) : 2=0.16%, 4=2.27%, 10=1.86%, 50=1.12%, 100=5.72% 00:18:38.339 lat (msec) : 250=86.87%, 500=2.00% 00:18:38.339 cpu : usr=0.13%, sys=1.14%, ctx=820, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=3656,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job2: (groupid=0, jobs=1): err= 0: pid=90530: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=496, BW=124MiB/s (130MB/s)(1259MiB/10148msec) 00:18:38.339 slat (usec): min=16, max=136096, avg=1953.49, stdev=8230.65 00:18:38.339 clat (msec): min=37, max=371, avg=126.81, stdev=67.18 00:18:38.339 lat (msec): min=37, max=371, avg=128.76, stdev=68.60 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 44], 5.00th=[ 52], 10.00th=[ 55], 20.00th=[ 62], 00:18:38.339 | 30.00th=[ 78], 40.00th=[ 87], 50.00th=[ 93], 60.00th=[ 115], 00:18:38.339 | 70.00th=[ 197], 80.00th=[ 205], 90.00th=[ 215], 95.00th=[ 222], 00:18:38.339 | 99.00th=[ 264], 99.50th=[ 284], 99.90th=[ 372], 99.95th=[ 372], 00:18:38.339 | 99.99th=[ 372] 00:18:38.339 bw ( KiB/s): min=64383, max=286208, per=6.53%, avg=129805.37, 
stdev=69272.04, samples=19 00:18:38.339 iops : min= 251, max= 1118, avg=507.00, stdev=270.60, samples=19 00:18:38.339 lat (msec) : 50=3.49%, 100=53.25%, 250=42.07%, 500=1.19% 00:18:38.339 cpu : usr=0.22%, sys=1.51%, ctx=1085, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=5037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job3: (groupid=0, jobs=1): err= 0: pid=90531: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=352, BW=88.1MiB/s (92.4MB/s)(896MiB/10159msec) 00:18:38.339 slat (usec): min=16, max=179828, avg=2790.14, stdev=12245.24 00:18:38.339 clat (msec): min=14, max=464, avg=178.47, stdev=46.49 00:18:38.339 lat (msec): min=14, max=464, avg=181.26, stdev=48.48 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 39], 5.00th=[ 87], 10.00th=[ 134], 20.00th=[ 144], 00:18:38.339 | 30.00th=[ 157], 40.00th=[ 169], 50.00th=[ 184], 60.00th=[ 201], 00:18:38.339 | 70.00th=[ 209], 80.00th=[ 213], 90.00th=[ 222], 95.00th=[ 234], 00:18:38.339 | 99.00th=[ 292], 99.50th=[ 305], 99.90th=[ 388], 99.95th=[ 464], 00:18:38.339 | 99.99th=[ 464] 00:18:38.339 bw ( KiB/s): min=62976, max=172032, per=4.53%, avg=90036.05, stdev=24868.12, samples=20 00:18:38.339 iops : min= 246, max= 672, avg=351.65, stdev=97.16, samples=20 00:18:38.339 lat (msec) : 20=0.42%, 50=1.14%, 100=5.70%, 250=90.12%, 500=2.62% 00:18:38.339 cpu : usr=0.16%, sys=1.12%, ctx=696, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=3582,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job4: (groupid=0, jobs=1): err= 0: pid=90532: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=352, BW=88.2MiB/s (92.4MB/s)(895MiB/10147msec) 00:18:38.339 slat (usec): min=22, max=113697, avg=2809.16, stdev=11035.19 00:18:38.339 clat (msec): min=28, max=324, avg=178.45, stdev=40.13 00:18:38.339 lat (msec): min=28, max=396, avg=181.26, stdev=42.05 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 59], 5.00th=[ 90], 10.00th=[ 138], 20.00th=[ 146], 00:18:38.339 | 30.00th=[ 159], 40.00th=[ 171], 50.00th=[ 186], 60.00th=[ 201], 00:18:38.339 | 70.00th=[ 207], 80.00th=[ 213], 90.00th=[ 220], 95.00th=[ 226], 00:18:38.339 | 99.00th=[ 243], 99.50th=[ 271], 99.90th=[ 317], 99.95th=[ 321], 00:18:38.339 | 99.99th=[ 326] 00:18:38.339 bw ( KiB/s): min=64512, max=149803, per=4.54%, avg=90215.37, stdev=20692.11, samples=19 00:18:38.339 iops : min= 252, max= 585, avg=352.32, stdev=80.75, samples=19 00:18:38.339 lat (msec) : 50=0.14%, 100=5.65%, 250=93.43%, 500=0.78% 00:18:38.339 cpu : usr=0.21%, sys=1.44%, ctx=550, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=3578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:18:38.339 job5: (groupid=0, jobs=1): err= 0: pid=90533: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=656, BW=164MiB/s (172MB/s)(1668MiB/10158msec) 00:18:38.339 slat (usec): min=19, max=185398, avg=1501.42, stdev=7640.87 00:18:38.339 clat (msec): min=15, max=402, avg=95.77, stdev=67.98 00:18:38.339 lat (msec): min=15, max=402, avg=97.28, stdev=69.35 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 26], 5.00th=[ 36], 10.00th=[ 47], 20.00th=[ 53], 00:18:38.339 | 30.00th=[ 56], 40.00th=[ 59], 50.00th=[ 63], 60.00th=[ 65], 00:18:38.339 | 70.00th=[ 73], 80.00th=[ 201], 90.00th=[ 213], 95.00th=[ 220], 00:18:38.339 | 99.00th=[ 243], 99.50th=[ 253], 99.90th=[ 363], 99.95th=[ 363], 00:18:38.339 | 99.99th=[ 401] 00:18:38.339 bw ( KiB/s): min=63488, max=288256, per=8.51%, avg=169181.50, stdev=99353.65, samples=20 00:18:38.339 iops : min= 248, max= 1126, avg=660.80, stdev=388.15, samples=20 00:18:38.339 lat (msec) : 20=0.27%, 50=13.70%, 100=59.66%, 250=25.63%, 500=0.75% 00:18:38.339 cpu : usr=0.22%, sys=1.88%, ctx=1596, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=6673,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job6: (groupid=0, jobs=1): err= 0: pid=90534: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=629, BW=157MiB/s (165MB/s)(1599MiB/10159msec) 00:18:38.339 slat (usec): min=16, max=136384, avg=1563.43, stdev=7088.62 00:18:38.339 clat (msec): min=22, max=327, avg=99.95, stdev=68.51 00:18:38.339 lat (msec): min=22, max=345, avg=101.52, stdev=69.81 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 41], 5.00th=[ 48], 10.00th=[ 52], 20.00th=[ 55], 00:18:38.339 | 30.00th=[ 58], 40.00th=[ 61], 50.00th=[ 63], 60.00th=[ 67], 00:18:38.339 | 70.00th=[ 74], 80.00th=[ 205], 90.00th=[ 215], 95.00th=[ 224], 00:18:38.339 | 99.00th=[ 241], 99.50th=[ 259], 99.90th=[ 317], 99.95th=[ 317], 00:18:38.339 | 99.99th=[ 330] 00:18:38.339 bw ( KiB/s): min=64000, max=282624, per=8.15%, avg=162049.10, stdev=98945.94, samples=20 00:18:38.339 iops : min= 250, max= 1104, avg=632.95, stdev=386.55, samples=20 00:18:38.339 lat (msec) : 50=8.13%, 100=65.86%, 250=25.47%, 500=0.53% 00:18:38.339 cpu : usr=0.21%, sys=1.99%, ctx=1335, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=6395,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job7: (groupid=0, jobs=1): err= 0: pid=90535: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=347, BW=86.9MiB/s (91.1MB/s)(882MiB/10155msec) 00:18:38.339 slat (usec): min=15, max=82712, avg=2808.39, stdev=9378.67 00:18:38.339 clat (msec): min=24, max=371, avg=181.10, stdev=46.55 00:18:38.339 lat (msec): min=24, max=371, avg=183.91, stdev=48.00 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 59], 5.00th=[ 88], 10.00th=[ 136], 20.00th=[ 148], 00:18:38.339 | 30.00th=[ 159], 40.00th=[ 174], 50.00th=[ 184], 60.00th=[ 205], 00:18:38.339 | 70.00th=[ 211], 80.00th=[ 220], 
90.00th=[ 226], 95.00th=[ 232], 00:18:38.339 | 99.00th=[ 288], 99.50th=[ 351], 99.90th=[ 359], 99.95th=[ 372], 00:18:38.339 | 99.99th=[ 372] 00:18:38.339 bw ( KiB/s): min=68745, max=161792, per=4.46%, avg=88721.90, stdev=22627.78, samples=20 00:18:38.339 iops : min= 268, max= 632, avg=346.50, stdev=88.45, samples=20 00:18:38.339 lat (msec) : 50=0.99%, 100=6.26%, 250=90.88%, 500=1.87% 00:18:38.339 cpu : usr=0.14%, sys=1.11%, ctx=754, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=3529,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job8: (groupid=0, jobs=1): err= 0: pid=90536: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=1468, BW=367MiB/s (385MB/s)(3682MiB/10028msec) 00:18:38.339 slat (usec): min=20, max=43169, avg=674.68, stdev=2725.34 00:18:38.339 clat (usec): min=12427, max=99829, avg=42822.37, stdev=15799.74 00:18:38.339 lat (msec): min=12, max=124, avg=43.50, stdev=16.09 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 19], 5.00th=[ 23], 10.00th=[ 25], 20.00th=[ 28], 00:18:38.339 | 30.00th=[ 31], 40.00th=[ 34], 50.00th=[ 41], 60.00th=[ 49], 00:18:38.339 | 70.00th=[ 54], 80.00th=[ 58], 90.00th=[ 64], 95.00th=[ 68], 00:18:38.339 | 99.00th=[ 85], 99.50th=[ 90], 99.90th=[ 95], 99.95th=[ 100], 00:18:38.339 | 99.99th=[ 101] 00:18:38.339 bw ( KiB/s): min=262656, max=566055, per=18.87%, avg=375205.55, stdev=124101.55, samples=20 00:18:38.339 iops : min= 1026, max= 2211, avg=1465.65, stdev=484.75, samples=20 00:18:38.339 lat (msec) : 20=2.21%, 50=60.76%, 100=37.02% 00:18:38.339 cpu : usr=0.46%, sys=4.21%, ctx=2552, majf=0, minf=4097 00:18:38.339 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:18:38.339 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.339 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.339 issued rwts: total=14726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.339 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.339 job9: (groupid=0, jobs=1): err= 0: pid=90537: Sun Jul 14 10:11:06 2024 00:18:38.339 read: IOPS=357, BW=89.5MiB/s (93.8MB/s)(909MiB/10158msec) 00:18:38.339 slat (usec): min=18, max=106653, avg=2700.61, stdev=9720.59 00:18:38.339 clat (msec): min=18, max=384, avg=175.85, stdev=54.23 00:18:38.339 lat (msec): min=18, max=384, avg=178.55, stdev=55.76 00:18:38.339 clat percentiles (msec): 00:18:38.339 | 1.00th=[ 25], 5.00th=[ 61], 10.00th=[ 75], 20.00th=[ 148], 00:18:38.339 | 30.00th=[ 157], 40.00th=[ 171], 50.00th=[ 182], 60.00th=[ 201], 00:18:38.340 | 70.00th=[ 209], 80.00th=[ 218], 90.00th=[ 228], 95.00th=[ 236], 00:18:38.340 | 99.00th=[ 326], 99.50th=[ 326], 99.90th=[ 384], 99.95th=[ 384], 00:18:38.340 | 99.99th=[ 384] 00:18:38.340 bw ( KiB/s): min=67072, max=218624, per=4.60%, avg=91413.95, stdev=32993.70, samples=20 00:18:38.340 iops : min= 262, max= 854, avg=357.00, stdev=128.92, samples=20 00:18:38.340 lat (msec) : 20=0.14%, 50=2.64%, 100=8.91%, 250=85.94%, 500=2.37% 00:18:38.340 cpu : usr=0.13%, sys=1.13%, ctx=886, majf=0, minf=4097 00:18:38.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.3% 00:18:38.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
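[editor's note] The per-job headline numbers are internally consistent and easy to sanity-check. Taking job0 from earlier in this run (issued rwts total=9846 reads of 256 KiB in 10030 msec):

    # 9846 completed 256 KiB reads in ~10.03 s
    echo 'scale=1; (9846 * 256 / 1024) / 10.03' | bc
    # -> 245.4  MiB/s, matching the reported BW=245MiB/s
    #    (and 9846 / 10.03 = 981.6, matching IOPS=981)
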
00:18:38.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.340 issued rwts: total=3635,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.340 job10: (groupid=0, jobs=1): err= 0: pid=90538: Sun Jul 14 10:11:06 2024 00:18:38.340 read: IOPS=1821, BW=455MiB/s (478MB/s)(4560MiB/10013msec) 00:18:38.340 slat (usec): min=15, max=29223, avg=543.44, stdev=2262.06 00:18:38.340 clat (msec): min=5, max=108, avg=34.51, stdev=19.91 00:18:38.340 lat (msec): min=5, max=123, avg=35.06, stdev=20.26 00:18:38.340 clat percentiles (msec): 00:18:38.340 | 1.00th=[ 16], 5.00th=[ 19], 10.00th=[ 21], 20.00th=[ 23], 00:18:38.340 | 30.00th=[ 24], 40.00th=[ 26], 50.00th=[ 28], 60.00th=[ 31], 00:18:38.340 | 70.00th=[ 33], 80.00th=[ 37], 90.00th=[ 67], 95.00th=[ 88], 00:18:38.340 | 99.00th=[ 101], 99.50th=[ 103], 99.90th=[ 106], 99.95th=[ 106], 00:18:38.340 | 99.99th=[ 109] 00:18:38.340 bw ( KiB/s): min=167936, max=636678, per=23.40%, avg=465144.45, stdev=195331.12, samples=20 00:18:38.340 iops : min= 656, max= 2487, avg=1816.95, stdev=763.00, samples=20 00:18:38.340 lat (msec) : 10=0.12%, 20=8.83%, 50=75.28%, 100=14.84%, 250=0.94% 00:18:38.340 cpu : usr=0.51%, sys=4.89%, ctx=3696, majf=0, minf=4097 00:18:38.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:18:38.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:38.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:38.340 issued rwts: total=18240,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:38.340 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:38.340 00:18:38.340 Run status group 0 (all jobs): 00:18:38.340 READ: bw=1942MiB/s (2036MB/s), 86.9MiB/s-455MiB/s (91.1MB/s-478MB/s), io=19.3GiB (20.7GB), run=10013-10159msec 00:18:38.340 00:18:38.340 Disk stats (read/write): 00:18:38.340 nvme0n1: ios=19646/0, merge=0/0, ticks=1238921/0, in_queue=1238921, util=97.38% 00:18:38.340 nvme10n1: ios=7221/0, merge=0/0, ticks=1234500/0, in_queue=1234500, util=97.74% 00:18:38.340 nvme1n1: ios=9946/0, merge=0/0, ticks=1233507/0, in_queue=1233507, util=97.79% 00:18:38.340 nvme2n1: ios=7042/0, merge=0/0, ticks=1228740/0, in_queue=1228740, util=98.14% 00:18:38.340 nvme3n1: ios=7031/0, merge=0/0, ticks=1238090/0, in_queue=1238090, util=98.15% 00:18:38.340 nvme4n1: ios=13223/0, merge=0/0, ticks=1229502/0, in_queue=1229502, util=97.54% 00:18:38.340 nvme5n1: ios=12663/0, merge=0/0, ticks=1235203/0, in_queue=1235203, util=98.40% 00:18:38.340 nvme6n1: ios=6938/0, merge=0/0, ticks=1233795/0, in_queue=1233795, util=98.51% 00:18:38.340 nvme7n1: ios=29369/0, merge=0/0, ticks=1223962/0, in_queue=1223962, util=98.46% 00:18:38.340 nvme8n1: ios=7158/0, merge=0/0, ticks=1231535/0, in_queue=1231535, util=98.65% 00:18:38.340 nvme9n1: ios=35208/0, merge=0/0, ticks=1185504/0, in_queue=1185504, util=98.98% 00:18:38.340 10:11:06 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:18:38.340 [global] 00:18:38.340 thread=1 00:18:38.340 invalidate=1 00:18:38.340 rw=randwrite 00:18:38.340 time_based=1 00:18:38.340 runtime=10 00:18:38.340 ioengine=libaio 00:18:38.340 direct=1 00:18:38.340 bs=262144 00:18:38.340 iodepth=64 00:18:38.340 norandommap=1 00:18:38.340 numjobs=1 00:18:38.340 00:18:38.340 [job0] 00:18:38.340 filename=/dev/nvme0n1 00:18:38.340 [job1] 00:18:38.340 filename=/dev/nvme10n1 00:18:38.340 [job2] 00:18:38.340 
filename=/dev/nvme1n1 00:18:38.340 [job3] 00:18:38.340 filename=/dev/nvme2n1 00:18:38.340 [job4] 00:18:38.340 filename=/dev/nvme3n1 00:18:38.340 [job5] 00:18:38.340 filename=/dev/nvme4n1 00:18:38.340 [job6] 00:18:38.340 filename=/dev/nvme5n1 00:18:38.340 [job7] 00:18:38.340 filename=/dev/nvme6n1 00:18:38.340 [job8] 00:18:38.340 filename=/dev/nvme7n1 00:18:38.340 [job9] 00:18:38.340 filename=/dev/nvme8n1 00:18:38.340 [job10] 00:18:38.340 filename=/dev/nvme9n1 00:18:38.340 Could not set queue depth (nvme0n1) 00:18:38.340 Could not set queue depth (nvme10n1) 00:18:38.340 Could not set queue depth (nvme1n1) 00:18:38.340 Could not set queue depth (nvme2n1) 00:18:38.340 Could not set queue depth (nvme3n1) 00:18:38.340 Could not set queue depth (nvme4n1) 00:18:38.340 Could not set queue depth (nvme5n1) 00:18:38.340 Could not set queue depth (nvme6n1) 00:18:38.340 Could not set queue depth (nvme7n1) 00:18:38.340 Could not set queue depth (nvme8n1) 00:18:38.340 Could not set queue depth (nvme9n1) 00:18:38.340 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:18:38.340 fio-3.35 00:18:38.340 Starting 11 threads 00:18:48.377 00:18:48.377 job0: (groupid=0, jobs=1): err= 0: pid=90733: Sun Jul 14 10:11:17 2024 00:18:48.377 write: IOPS=308, BW=77.2MiB/s (81.0MB/s)(784MiB/10150msec); 0 zone resets 00:18:48.377 slat (usec): min=17, max=141553, avg=3186.34, stdev=6853.86 00:18:48.377 clat (msec): min=141, max=398, avg=203.95, stdev=27.12 00:18:48.377 lat (msec): min=144, max=398, avg=207.13, stdev=26.53 00:18:48.377 clat percentiles (msec): 00:18:48.377 | 1.00th=[ 157], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 186], 00:18:48.377 | 30.00th=[ 192], 40.00th=[ 199], 50.00th=[ 201], 60.00th=[ 207], 00:18:48.377 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 228], 95.00th=[ 236], 00:18:48.377 | 99.00th=[ 313], 99.50th=[ 326], 99.90th=[ 347], 99.95th=[ 397], 00:18:48.377 | 99.99th=[ 401] 00:18:48.377 bw ( KiB/s): min=39424, max=86528, per=5.26%, avg=78634.80, stdev=10192.79, samples=20 00:18:48.377 iops : min= 154, max= 338, avg=307.15, stdev=39.81, samples=20 00:18:48.377 lat (msec) : 250=96.30%, 500=3.70% 00:18:48.377 cpu : usr=0.56%, sys=0.72%, ctx=3454, majf=0, minf=1 00:18:48.377 IO depths 
: 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:48.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.377 issued rwts: total=0,3135,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.377 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.377 job1: (groupid=0, jobs=1): err= 0: pid=90734: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=553, BW=138MiB/s (145MB/s)(1399MiB/10106msec); 0 zone resets 00:18:48.378 slat (usec): min=19, max=71493, avg=1753.36, stdev=3226.29 00:18:48.378 clat (msec): min=9, max=311, avg=113.78, stdev=24.22 00:18:48.378 lat (msec): min=10, max=315, avg=115.53, stdev=24.31 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 43], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 109], 00:18:48.378 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:18:48.378 | 70.00th=[ 116], 80.00th=[ 117], 90.00th=[ 118], 95.00th=[ 120], 00:18:48.378 | 99.00th=[ 279], 99.50th=[ 305], 99.90th=[ 313], 99.95th=[ 313], 00:18:48.378 | 99.99th=[ 313] 00:18:48.378 bw ( KiB/s): min=102912, max=153600, per=9.47%, avg=141603.10, stdev=10113.39, samples=20 00:18:48.378 iops : min= 402, max= 600, avg=553.00, stdev=39.55, samples=20 00:18:48.378 lat (msec) : 10=0.02%, 20=0.14%, 50=1.09%, 100=1.68%, 250=96.02% 00:18:48.378 lat (msec) : 500=1.05% 00:18:48.378 cpu : usr=0.94%, sys=1.20%, ctx=7496, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,5596,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job2: (groupid=0, jobs=1): err= 0: pid=90746: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=312, BW=78.1MiB/s (81.8MB/s)(794MiB/10166msec); 0 zone resets 00:18:48.378 slat (usec): min=21, max=80581, avg=3144.70, stdev=6376.90 00:18:48.378 clat (msec): min=30, max=341, avg=201.73, stdev=30.39 00:18:48.378 lat (msec): min=30, max=341, avg=204.88, stdev=29.99 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 107], 5.00th=[ 167], 10.00th=[ 174], 20.00th=[ 182], 00:18:48.378 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 199], 60.00th=[ 205], 00:18:48.378 | 70.00th=[ 215], 80.00th=[ 226], 90.00th=[ 232], 95.00th=[ 239], 00:18:48.378 | 99.00th=[ 305], 99.50th=[ 309], 99.90th=[ 334], 99.95th=[ 342], 00:18:48.378 | 99.99th=[ 342] 00:18:48.378 bw ( KiB/s): min=61317, max=90112, per=5.33%, avg=79618.00, stdev=7623.09, samples=20 00:18:48.378 iops : min= 239, max= 352, avg=310.95, stdev=29.81, samples=20 00:18:48.378 lat (msec) : 50=0.38%, 100=0.50%, 250=95.09%, 500=4.03% 00:18:48.378 cpu : usr=0.90%, sys=0.91%, ctx=2172, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,3174,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job3: (groupid=0, jobs=1): err= 0: pid=90747: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=558, BW=140MiB/s (146MB/s)(1411MiB/10108msec); 0 zone resets 00:18:48.378 
slat (usec): min=19, max=70982, avg=1722.68, stdev=3168.07 00:18:48.378 clat (msec): min=3, max=295, avg=112.88, stdev=24.13 00:18:48.378 lat (msec): min=3, max=295, avg=114.60, stdev=24.31 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 28], 5.00th=[ 103], 10.00th=[ 105], 20.00th=[ 109], 00:18:48.378 | 30.00th=[ 110], 40.00th=[ 111], 50.00th=[ 113], 60.00th=[ 115], 00:18:48.378 | 70.00th=[ 116], 80.00th=[ 117], 90.00th=[ 118], 95.00th=[ 120], 00:18:48.378 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 284], 99.95th=[ 288], 00:18:48.378 | 99.99th=[ 296] 00:18:48.378 bw ( KiB/s): min=123126, max=156672, per=9.56%, avg=142807.10, stdev=6604.58, samples=20 00:18:48.378 iops : min= 480, max= 612, avg=557.70, stdev=26.02, samples=20 00:18:48.378 lat (msec) : 4=0.02%, 10=0.32%, 20=0.46%, 50=0.90%, 100=2.53% 00:18:48.378 lat (msec) : 250=94.56%, 500=1.21% 00:18:48.378 cpu : usr=0.97%, sys=1.23%, ctx=7562, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,5642,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job4: (groupid=0, jobs=1): err= 0: pid=90748: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=585, BW=146MiB/s (153MB/s)(1478MiB/10097msec); 0 zone resets 00:18:48.378 slat (usec): min=17, max=11098, avg=1685.96, stdev=2907.69 00:18:48.378 clat (msec): min=4, max=210, avg=107.59, stdev=19.24 00:18:48.378 lat (msec): min=4, max=210, avg=109.27, stdev=19.33 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 39], 5.00th=[ 46], 10.00th=[ 103], 20.00th=[ 109], 00:18:48.378 | 30.00th=[ 110], 40.00th=[ 110], 50.00th=[ 111], 60.00th=[ 115], 00:18:48.378 | 70.00th=[ 116], 80.00th=[ 117], 90.00th=[ 117], 95.00th=[ 118], 00:18:48.378 | 99.00th=[ 120], 99.50th=[ 153], 99.90th=[ 197], 99.95th=[ 203], 00:18:48.378 | 99.99th=[ 211] 00:18:48.378 bw ( KiB/s): min=136192, max=266240, per=10.01%, avg=149666.15, stdev=27756.01, samples=20 00:18:48.378 iops : min= 532, max= 1040, avg=584.50, stdev=108.46, samples=20 00:18:48.378 lat (msec) : 10=0.03%, 20=0.15%, 50=5.11%, 100=2.88%, 250=91.83% 00:18:48.378 cpu : usr=1.14%, sys=1.90%, ctx=7126, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,5911,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job5: (groupid=0, jobs=1): err= 0: pid=90749: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=818, BW=205MiB/s (215MB/s)(2060MiB/10065msec); 0 zone resets 00:18:48.378 slat (usec): min=18, max=8717, avg=1205.78, stdev=2046.23 00:18:48.378 clat (usec): min=1778, max=141957, avg=76944.11, stdev=9787.99 00:18:48.378 lat (msec): min=2, max=141, avg=78.15, stdev= 9.78 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 21], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:18:48.378 | 30.00th=[ 78], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:18:48.378 | 70.00th=[ 82], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 83], 00:18:48.378 | 99.00th=[ 84], 99.50th=[ 92], 99.90th=[ 132], 99.95th=[ 138], 00:18:48.378 
| 99.99th=[ 142] 00:18:48.378 bw ( KiB/s): min=198144, max=275968, per=14.01%, avg=209311.10, stdev=16590.71, samples=20 00:18:48.378 iops : min= 774, max= 1078, avg=817.60, stdev=64.82, samples=20 00:18:48.378 lat (msec) : 2=0.01%, 4=0.17%, 10=0.23%, 20=0.53%, 50=1.80% 00:18:48.378 lat (msec) : 100=96.84%, 250=0.41% 00:18:48.378 cpu : usr=1.63%, sys=2.25%, ctx=10372, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,8240,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job6: (groupid=0, jobs=1): err= 0: pid=90750: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=809, BW=202MiB/s (212MB/s)(2039MiB/10073msec); 0 zone resets 00:18:48.378 slat (usec): min=18, max=7157, avg=1189.95, stdev=2048.40 00:18:48.378 clat (msec): min=3, max=239, avg=77.81, stdev=11.77 00:18:48.378 lat (msec): min=3, max=239, avg=79.00, stdev=11.72 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 40], 5.00th=[ 72], 10.00th=[ 73], 20.00th=[ 77], 00:18:48.378 | 30.00th=[ 78], 40.00th=[ 78], 50.00th=[ 79], 60.00th=[ 80], 00:18:48.378 | 70.00th=[ 81], 80.00th=[ 82], 90.00th=[ 83], 95.00th=[ 83], 00:18:48.378 | 99.00th=[ 88], 99.50th=[ 138], 99.90th=[ 220], 99.95th=[ 230], 00:18:48.378 | 99.99th=[ 241] 00:18:48.378 bw ( KiB/s): min=200192, max=234496, per=13.88%, avg=207427.50, stdev=8202.64, samples=20 00:18:48.378 iops : min= 782, max= 916, avg=809.90, stdev=32.10, samples=20 00:18:48.378 lat (msec) : 4=0.06%, 10=0.05%, 20=0.21%, 50=2.19%, 100=96.60% 00:18:48.378 lat (msec) : 250=0.88% 00:18:48.378 cpu : usr=1.32%, sys=1.72%, ctx=11544, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,8157,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job7: (groupid=0, jobs=1): err= 0: pid=90751: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=314, BW=78.7MiB/s (82.5MB/s)(801MiB/10176msec); 0 zone resets 00:18:48.378 slat (usec): min=21, max=110699, avg=3119.95, stdev=6367.18 00:18:48.378 clat (msec): min=6, max=346, avg=200.06, stdev=32.45 00:18:48.378 lat (msec): min=6, max=346, avg=203.18, stdev=32.10 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 96], 5.00th=[ 165], 10.00th=[ 171], 20.00th=[ 180], 00:18:48.378 | 30.00th=[ 186], 40.00th=[ 192], 50.00th=[ 197], 60.00th=[ 203], 00:18:48.378 | 70.00th=[ 213], 80.00th=[ 224], 90.00th=[ 232], 95.00th=[ 243], 00:18:48.378 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 338], 99.95th=[ 347], 00:18:48.378 | 99.99th=[ 347] 00:18:48.378 bw ( KiB/s): min=63488, max=92344, per=5.39%, avg=80480.95, stdev=7470.35, samples=20 00:18:48.378 iops : min= 248, max= 360, avg=314.00, stdev=29.12, samples=20 00:18:48.378 lat (msec) : 10=0.25%, 50=0.25%, 100=0.62%, 250=94.32%, 500=4.56% 00:18:48.378 cpu : usr=0.63%, sys=0.84%, ctx=1940, majf=0, minf=1 00:18:48.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:48.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.378 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.378 issued rwts: total=0,3204,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.378 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.378 job8: (groupid=0, jobs=1): err= 0: pid=90752: Sun Jul 14 10:11:17 2024 00:18:48.378 write: IOPS=315, BW=78.8MiB/s (82.6MB/s)(801MiB/10158msec); 0 zone resets 00:18:48.378 slat (usec): min=18, max=73230, avg=3119.64, stdev=6208.58 00:18:48.378 clat (msec): min=70, max=329, avg=199.83, stdev=25.94 00:18:48.378 lat (msec): min=70, max=329, avg=202.95, stdev=25.37 00:18:48.378 clat percentiles (msec): 00:18:48.378 | 1.00th=[ 153], 5.00th=[ 167], 10.00th=[ 171], 20.00th=[ 182], 00:18:48.378 | 30.00th=[ 190], 40.00th=[ 194], 50.00th=[ 199], 60.00th=[ 203], 00:18:48.378 | 70.00th=[ 207], 80.00th=[ 218], 90.00th=[ 224], 95.00th=[ 228], 00:18:48.378 | 99.00th=[ 305], 99.50th=[ 313], 99.90th=[ 321], 99.95th=[ 330], 00:18:48.378 | 99.99th=[ 330] 00:18:48.378 bw ( KiB/s): min=52224, max=88064, per=5.38%, avg=80350.40, stdev=7976.65, samples=20 00:18:48.378 iops : min= 204, max= 344, avg=313.85, stdev=31.16, samples=20 00:18:48.378 lat (msec) : 100=0.31%, 250=96.38%, 500=3.31% 00:18:48.378 cpu : usr=0.63%, sys=0.93%, ctx=3303, majf=0, minf=1 00:18:48.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:48.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.379 issued rwts: total=0,3202,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.379 job9: (groupid=0, jobs=1): err= 0: pid=90753: Sun Jul 14 10:11:17 2024 00:18:48.379 write: IOPS=311, BW=78.0MiB/s (81.8MB/s)(792MiB/10154msec); 0 zone resets 00:18:48.379 slat (usec): min=20, max=76883, avg=3152.36, stdev=6324.10 00:18:48.379 clat (msec): min=56, max=350, avg=201.86, stdev=28.47 00:18:48.379 lat (msec): min=56, max=362, avg=205.01, stdev=27.96 00:18:48.379 clat percentiles (msec): 00:18:48.379 | 1.00th=[ 148], 5.00th=[ 165], 10.00th=[ 174], 20.00th=[ 184], 00:18:48.379 | 30.00th=[ 190], 40.00th=[ 197], 50.00th=[ 201], 60.00th=[ 205], 00:18:48.379 | 70.00th=[ 213], 80.00th=[ 222], 90.00th=[ 228], 95.00th=[ 232], 00:18:48.379 | 99.00th=[ 313], 99.50th=[ 321], 99.90th=[ 351], 99.95th=[ 351], 00:18:48.379 | 99.99th=[ 351] 00:18:48.379 bw ( KiB/s): min=55296, max=86528, per=5.32%, avg=79479.35, stdev=7495.96, samples=20 00:18:48.379 iops : min= 216, max= 338, avg=310.45, stdev=29.27, samples=20 00:18:48.379 lat (msec) : 100=0.51%, 250=96.28%, 500=3.22% 00:18:48.379 cpu : usr=0.64%, sys=0.61%, ctx=3571, majf=0, minf=1 00:18:48.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:18:48.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.379 issued rwts: total=0,3168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.379 job10: (groupid=0, jobs=1): err= 0: pid=90754: Sun Jul 14 10:11:17 2024 00:18:48.379 write: IOPS=993, BW=248MiB/s (261MB/s)(2495MiB/10039msec); 0 zone resets 00:18:48.379 slat (usec): min=17, max=119994, avg=976.28, stdev=3055.30 00:18:48.379 clat (usec): min=1302, max=364898, avg=63395.51, stdev=58975.14 00:18:48.379 lat (usec): min=1960, max=364990, avg=64371.79, stdev=59818.24 00:18:48.379 clat 
percentiles (msec): 00:18:48.379 | 1.00th=[ 16], 5.00th=[ 40], 10.00th=[ 41], 20.00th=[ 42], 00:18:48.379 | 30.00th=[ 42], 40.00th=[ 43], 50.00th=[ 43], 60.00th=[ 44], 00:18:48.379 | 70.00th=[ 44], 80.00th=[ 45], 90.00th=[ 199], 95.00th=[ 222], 00:18:48.379 | 99.00th=[ 236], 99.50th=[ 300], 99.90th=[ 321], 99.95th=[ 326], 00:18:48.379 | 99.99th=[ 363] 00:18:48.379 bw ( KiB/s): min=43008, max=383488, per=16.98%, avg=253715.05, stdev=152856.21, samples=20 00:18:48.379 iops : min= 168, max= 1498, avg=991.05, stdev=597.08, samples=20 00:18:48.379 lat (msec) : 2=0.02%, 4=0.12%, 10=0.46%, 20=1.23%, 50=85.32% 00:18:48.379 lat (msec) : 100=0.46%, 250=11.50%, 500=0.89% 00:18:48.379 cpu : usr=1.64%, sys=2.30%, ctx=11201, majf=0, minf=1 00:18:48.379 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:48.379 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:48.379 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:18:48.379 issued rwts: total=0,9978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:48.379 latency : target=0, window=0, percentile=100.00%, depth=64 00:18:48.379 00:18:48.379 Run status group 0 (all jobs): 00:18:48.379 WRITE: bw=1459MiB/s (1530MB/s), 77.2MiB/s-248MiB/s (81.0MB/s-261MB/s), io=14.5GiB (15.6GB), run=10039-10176msec 00:18:48.379 00:18:48.379 Disk stats (read/write): 00:18:48.379 nvme0n1: ios=49/6114, merge=0/0, ticks=24/1203734, in_queue=1203758, util=97.54% 00:18:48.379 nvme10n1: ios=49/11056, merge=0/0, ticks=65/1215382, in_queue=1215447, util=98.00% 00:18:48.379 nvme1n1: ios=31/6219, merge=0/0, ticks=21/1208029, in_queue=1208050, util=98.01% 00:18:48.379 nvme2n1: ios=0/11149, merge=0/0, ticks=0/1215601, in_queue=1215601, util=97.96% 00:18:48.379 nvme3n1: ios=22/11669, merge=0/0, ticks=17/1212411, in_queue=1212428, util=97.98% 00:18:48.379 nvme4n1: ios=0/16330, merge=0/0, ticks=0/1215532, in_queue=1215532, util=98.21% 00:18:48.379 nvme5n1: ios=0/16193, merge=0/0, ticks=0/1219565, in_queue=1219565, util=98.46% 00:18:48.379 nvme6n1: ios=0/6284, merge=0/0, ticks=0/1208740, in_queue=1208740, util=98.51% 00:18:48.379 nvme7n1: ios=0/6265, merge=0/0, ticks=0/1206703, in_queue=1206703, util=98.53% 00:18:48.379 nvme8n1: ios=0/6205, merge=0/0, ticks=0/1204898, in_queue=1204898, util=98.67% 00:18:48.379 nvme9n1: ios=0/19756, merge=0/0, ticks=0/1216542, in_queue=1216542, util=98.70% 00:18:48.379 10:11:17 -- target/multiconnection.sh@36 -- # sync 00:18:48.379 10:11:17 -- target/multiconnection.sh@37 -- # seq 1 11 00:18:48.379 10:11:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:48.379 10:11:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:18:48.379 10:11:17 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:18:48.379 10:11:17 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:48.379 10:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 
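[editor's note] After both fio passes complete, the script tears the fabric down in the mirror order of setup. The multiconnection.sh@37-@40 trace lines above and below correspond to a loop of this shape (a sketch reconstructed from the trace, not the verbatim multiconnection.sh source):

    for i in $(seq 1 $NVMF_SUBSYS); do
        # detach the initiator-side controller for this subsystem
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # poll lsblk until no block device with serial SPDK${i} remains
        waitforserial_disconnect "SPDK${i}"
        # remove the subsystem from the SPDK target over RPC
        rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done
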
00:18:48.379 10:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:18:48.379 10:11:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:18:48.379 10:11:17 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:18:48.379 10:11:17 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:17 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:18:48.379 10:11:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.379 10:11:17 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:17 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:17 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:18:48.379 10:11:17 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:18:48.379 10:11:17 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:17 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:18:48.379 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:18:48.379 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.379 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:18:48.379 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:18:48.379 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:18:48.379 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:18:48.379 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.379 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:18 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:18:48.379 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:18:48.379 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:18:48.379 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:18:48.379 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.379 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:18:48.379 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:18:48.379 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:18:48.379 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:18:48.379 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.379 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.379 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.379 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.379 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:18:48.379 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:18:48.379 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:18:48.379 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.379 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:18:48.379 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.379 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:18:48.380 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.380 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.380 10:11:18 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.380 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:18:48.380 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:18:48.380 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:18:48.380 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:18:48.380 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.380 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:18:48.380 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.380 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.380 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.380 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:18:48.380 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:18:48.380 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:18:48.380 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:18:48.380 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.380 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:18:48.380 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.380 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.380 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.380 10:11:18 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:18:48.380 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:18:48.380 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:18:48.380 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:18:48.380 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.380 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:18:48.380 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.380 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.380 10:11:18 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:48.380 10:11:18 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:18:48.380 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:18:48.380 10:11:18 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:18:48.380 10:11:18 -- common/autotest_common.sh@1198 -- # local i=0 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:18:48.380 10:11:18 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:18:48.380 10:11:18 -- common/autotest_common.sh@1210 -- # return 0 00:18:48.380 10:11:18 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:18:48.380 10:11:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:48.380 10:11:18 -- common/autotest_common.sh@10 -- # set +x 00:18:48.380 10:11:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:48.380 10:11:18 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:18:48.380 10:11:18 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:48.380 10:11:18 -- target/multiconnection.sh@47 -- # nvmftestfini 00:18:48.380 10:11:18 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:48.380 10:11:18 -- nvmf/common.sh@116 -- # sync 00:18:48.380 10:11:18 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:48.380 10:11:18 -- nvmf/common.sh@119 -- # set +e 00:18:48.380 10:11:18 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:48.380 10:11:18 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:48.380 rmmod nvme_tcp 00:18:48.380 rmmod nvme_fabrics 00:18:48.380 rmmod nvme_keyring 00:18:48.380 10:11:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:48.380 10:11:18 -- nvmf/common.sh@123 -- # set -e 00:18:48.380 10:11:18 -- nvmf/common.sh@124 -- # return 0 00:18:48.380 10:11:18 -- nvmf/common.sh@477 -- # '[' -n 90050 ']' 00:18:48.380 10:11:18 -- nvmf/common.sh@478 -- # killprocess 90050 00:18:48.380 10:11:18 -- common/autotest_common.sh@926 -- # '[' -z 90050 ']' 00:18:48.380 10:11:18 -- common/autotest_common.sh@930 -- # kill -0 90050 00:18:48.380 10:11:18 -- common/autotest_common.sh@931 -- # uname 00:18:48.380 10:11:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:48.380 10:11:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 90050 00:18:48.380 killing process with pid 90050 00:18:48.380 10:11:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:48.380 10:11:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:48.380 10:11:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 90050' 00:18:48.380 10:11:18 -- common/autotest_common.sh@945 -- # kill 90050 00:18:48.380 10:11:18 -- common/autotest_common.sh@950 -- # wait 90050 00:18:48.948 10:11:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:48.948 10:11:19 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:48.948 10:11:19 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:48.948 10:11:19 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:48.948 10:11:19 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:48.948 10:11:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.948 10:11:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.948 10:11:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
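The xtrace above is the per-subsystem teardown loop from multiconnection.sh: for each of the eleven subsystems (NVMF_SUBSYS=11, inferred from the final cnode11 iteration) it disconnects the initiator, waits for the SPDK<n> serial to drop out of lsblk, then deletes the subsystem over JSON-RPC. A condensed sketch of that loop, assuming rpc_cmd wraps scripts/rpc.py (the wrapper itself is not visible in this trace):

    NVMF_SUBSYS=11
    for i in $(seq 1 $NVMF_SUBSYS); do
        # Tear down the initiator-side connection for this subsystem.
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"
        # Block until the block device with serial SPDK<i> disappears.
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        # Remove the subsystem on the target side via JSON-RPC.
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
    done

The -w flag on grep matters here: without it the check for SPDK1 would keep matching while SPDK11's device was still draining.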
00:18:48.948 10:11:19 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:48.948 00:18:48.948 real 0m49.515s 00:18:48.948 user 2m45.331s 00:18:48.948 sys 0m26.189s 00:18:48.948 10:11:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:48.948 ************************************ 00:18:48.948 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 END TEST nvmf_multiconnection 00:18:48.948 ************************************ 00:18:48.948 10:11:19 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:48.948 10:11:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:48.948 10:11:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:48.948 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.948 ************************************ 00:18:48.948 START TEST nvmf_initiator_timeout 00:18:48.948 ************************************ 00:18:48.948 10:11:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:18:48.948 * Looking for test storage... 00:18:48.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:48.948 10:11:19 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:48.948 10:11:19 -- nvmf/common.sh@7 -- # uname -s 00:18:48.948 10:11:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:48.948 10:11:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:48.948 10:11:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:48.948 10:11:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:48.948 10:11:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:48.948 10:11:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:48.948 10:11:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:48.948 10:11:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:48.948 10:11:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:48.948 10:11:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:48.948 10:11:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:18:48.948 10:11:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:18:48.948 10:11:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:48.948 10:11:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:48.948 10:11:19 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:48.948 10:11:19 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:48.948 10:11:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:48.948 10:11:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:48.948 10:11:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:48.948 10:11:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.948 
10:11:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.948 10:11:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.949 10:11:19 -- paths/export.sh@5 -- # export PATH 00:18:48.949 10:11:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:48.949 10:11:19 -- nvmf/common.sh@46 -- # : 0 00:18:48.949 10:11:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:48.949 10:11:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:48.949 10:11:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:48.949 10:11:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:48.949 10:11:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:48.949 10:11:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:48.949 10:11:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:48.949 10:11:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:48.949 10:11:19 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:48.949 10:11:19 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:48.949 10:11:19 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:18:48.949 10:11:19 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:48.949 10:11:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:48.949 10:11:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:48.949 10:11:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:48.949 10:11:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:48.949 10:11:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:48.949 10:11:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:48.949 10:11:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:48.949 10:11:19 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:48.949 10:11:19 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:48.949 10:11:19 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:48.949 10:11:19 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:48.949 10:11:19 -- 
nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:48.949 10:11:19 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:48.949 10:11:19 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:48.949 10:11:19 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:48.949 10:11:19 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:48.949 10:11:19 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:48.949 10:11:19 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:48.949 10:11:19 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:48.949 10:11:19 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:48.949 10:11:19 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:48.949 10:11:19 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:48.949 10:11:19 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:48.949 10:11:19 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:48.949 10:11:19 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:48.949 10:11:19 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:48.949 10:11:19 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:48.949 Cannot find device "nvmf_tgt_br" 00:18:48.949 10:11:19 -- nvmf/common.sh@154 -- # true 00:18:48.949 10:11:19 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:48.949 Cannot find device "nvmf_tgt_br2" 00:18:48.949 10:11:19 -- nvmf/common.sh@155 -- # true 00:18:48.949 10:11:19 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:49.208 10:11:19 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:49.208 Cannot find device "nvmf_tgt_br" 00:18:49.208 10:11:19 -- nvmf/common.sh@157 -- # true 00:18:49.208 10:11:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:49.208 Cannot find device "nvmf_tgt_br2" 00:18:49.208 10:11:19 -- nvmf/common.sh@158 -- # true 00:18:49.208 10:11:19 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:49.208 10:11:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:49.208 10:11:19 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:49.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.208 10:11:19 -- nvmf/common.sh@161 -- # true 00:18:49.208 10:11:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:49.208 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:49.208 10:11:19 -- nvmf/common.sh@162 -- # true 00:18:49.208 10:11:19 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:49.208 10:11:19 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:49.208 10:11:19 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:49.208 10:11:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:49.208 10:11:19 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:49.208 10:11:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:49.208 10:11:19 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:49.208 10:11:19 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:49.208 10:11:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 
10.0.0.3/24 dev nvmf_tgt_if2 00:18:49.208 10:11:19 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:49.208 10:11:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:49.208 10:11:19 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:49.208 10:11:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:49.208 10:11:19 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:49.208 10:11:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:49.208 10:11:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:49.208 10:11:19 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:49.208 10:11:19 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:49.208 10:11:19 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:49.208 10:11:19 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:49.208 10:11:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:49.466 10:11:19 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:49.466 10:11:19 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:49.466 10:11:19 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:49.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:18:49.466 00:18:49.466 --- 10.0.0.2 ping statistics --- 00:18:49.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.466 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:18:49.466 10:11:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:49.466 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:49.466 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:18:49.466 00:18:49.466 --- 10.0.0.3 ping statistics --- 00:18:49.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.466 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:18:49.466 10:11:19 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:49.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:49.467 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:18:49.467 00:18:49.467 --- 10.0.0.1 ping statistics --- 00:18:49.467 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.467 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:18:49.467 10:11:19 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.467 10:11:19 -- nvmf/common.sh@421 -- # return 0 00:18:49.467 10:11:19 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:49.467 10:11:19 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.467 10:11:19 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:49.467 10:11:19 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:49.467 10:11:19 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.467 10:11:19 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:49.467 10:11:19 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:49.467 10:11:19 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:18:49.467 10:11:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:49.467 10:11:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:49.467 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:18:49.467 10:11:19 -- nvmf/common.sh@469 -- # nvmfpid=91124 00:18:49.467 10:11:19 -- nvmf/common.sh@470 -- # waitforlisten 91124 00:18:49.467 10:11:19 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.467 10:11:19 -- common/autotest_common.sh@819 -- # '[' -z 91124 ']' 00:18:49.467 10:11:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.467 10:11:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:49.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:49.467 10:11:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.467 10:11:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:49.467 10:11:19 -- common/autotest_common.sh@10 -- # set +x 00:18:49.467 [2024-07-14 10:11:19.868634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:18:49.467 [2024-07-14 10:11:19.868707] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.467 [2024-07-14 10:11:20.009686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.725 [2024-07-14 10:11:20.086911] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:49.725 [2024-07-14 10:11:20.087040] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.725 [2024-07-14 10:11:20.087053] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.725 [2024-07-14 10:11:20.087061] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
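The block above is nvmf_veth_init followed by the target launch: the initiator half of a veth pair stays in the root namespace while the target half moves into nvmf_tgt_ns_spdk, a bridge stitches the peer ends together, and the three pings confirm that 10.0.0.1, 10.0.0.2 and 10.0.0.3 are mutually reachable before nvmf_tgt starts inside the namespace. A minimal reproduction of that topology, with link bring-up and the second target interface (nvmf_tgt_if2) omitted for brevity:

    ip netns add nvmf_tgt_ns_spdk
    # veth pairs: the *_if end carries the address, the *_br end joins the bridge.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                      # initiator address
    ip netns exec nvmf_tgt_ns_spdk \
        ip addr add 10.0.0.2/24 dev nvmf_tgt_if                   # target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # Launch the target inside the namespace so it listens on 10.0.0.2:4420.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Without the explicit `ip link set ... up` calls shown in the trace, none of these links pass traffic; they are elided here only to keep the sketch short.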
00:18:49.725 [2024-07-14 10:11:20.087547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.725 [2024-07-14 10:11:20.087658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.725 [2024-07-14 10:11:20.087989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.725 [2024-07-14 10:11:20.087994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:50.292 10:11:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:50.292 10:11:20 -- common/autotest_common.sh@852 -- # return 0 00:18:50.292 10:11:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:50.292 10:11:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:50.292 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 10:11:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 Malloc0 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 Delay0 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 [2024-07-14 10:11:20.908004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:50.553 10:11:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:50.553 10:11:20 -- common/autotest_common.sh@10 -- # set +x 00:18:50.553 [2024-07-14 10:11:20.936184] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.553 10:11:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:50.553 10:11:20 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:50.811 10:11:21 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:18:50.811 10:11:21 -- common/autotest_common.sh@1177 -- # local i=0 00:18:50.811 10:11:21 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.811 10:11:21 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:18:50.811 10:11:21 -- common/autotest_common.sh@1184 -- # sleep 2 00:18:52.712 10:11:23 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:18:52.712 10:11:23 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:18:52.712 10:11:23 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:18:52.712 10:11:23 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:18:52.712 10:11:23 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.712 10:11:23 -- common/autotest_common.sh@1187 -- # return 0 00:18:52.712 10:11:23 -- target/initiator_timeout.sh@35 -- # fio_pid=91205 00:18:52.712 10:11:23 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:18:52.712 10:11:23 -- target/initiator_timeout.sh@37 -- # sleep 3 00:18:52.712 [global] 00:18:52.712 thread=1 00:18:52.712 invalidate=1 00:18:52.712 rw=write 00:18:52.712 time_based=1 00:18:52.712 runtime=60 00:18:52.712 ioengine=libaio 00:18:52.712 direct=1 00:18:52.712 bs=4096 00:18:52.712 iodepth=1 00:18:52.712 norandommap=0 00:18:52.712 numjobs=1 00:18:52.712 00:18:52.712 verify_dump=1 00:18:52.712 verify_backlog=512 00:18:52.712 verify_state_save=0 00:18:52.712 do_verify=1 00:18:52.712 verify=crc32c-intel 00:18:52.712 [job0] 00:18:52.712 filename=/dev/nvme0n1 00:18:52.712 Could not set queue depth (nvme0n1) 00:18:52.969 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:52.969 fio-3.35 00:18:52.969 Starting 1 thread 00:18:56.251 10:11:26 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:18:56.251 10:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.251 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.251 true 00:18:56.251 10:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.251 10:11:26 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:18:56.251 10:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.251 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.251 true 00:18:56.251 10:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.251 10:11:26 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:18:56.251 10:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.251 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.251 true 00:18:56.251 10:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.251 10:11:26 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:18:56.251 10:11:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:56.251 10:11:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.251 true 00:18:56.251 10:11:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:56.251 10:11:26 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:18:58.806 10:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.806 10:11:29 -- common/autotest_common.sh@10 -- # set +x 00:18:58.806 true 00:18:58.806 10:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:18:58.806 10:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.806 10:11:29 -- common/autotest_common.sh@10 -- # set +x 00:18:58.806 true 00:18:58.806 10:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:18:58.806 10:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.806 10:11:29 -- common/autotest_common.sh@10 -- # set +x 00:18:58.806 true 00:18:58.806 10:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:18:58.806 10:11:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:18:58.806 10:11:29 -- common/autotest_common.sh@10 -- # set +x 00:18:58.806 true 00:18:58.806 10:11:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:18:58.806 10:11:29 -- target/initiator_timeout.sh@54 -- # wait 91205 00:19:55.074 00:19:55.074 job0: (groupid=0, jobs=1): err= 0: pid=91227: Sun Jul 14 10:12:23 2024 00:19:55.074 read: IOPS=900, BW=3601KiB/s (3688kB/s)(211MiB/60000msec) 00:19:55.074 slat (usec): min=12, max=17547, avg=16.07, stdev=84.02 00:19:55.074 clat (usec): min=122, max=40419k, avg=926.49, stdev=173903.52 00:19:55.074 lat (usec): min=162, max=40419k, avg=942.56, stdev=173903.54 00:19:55.074 clat percentiles (usec): 00:19:55.074 | 1.00th=[ 157], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:19:55.074 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 178], 00:19:55.074 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 200], 95.00th=[ 208], 00:19:55.074 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 529], 99.95th=[ 603], 00:19:55.074 | 99.99th=[ 1020] 00:19:55.074 write: IOPS=904, BW=3618KiB/s (3705kB/s)(212MiB/60000msec); 0 zone resets 00:19:55.074 slat (usec): min=17, max=484, avg=22.14, stdev= 6.18 00:19:55.074 clat (usec): min=113, max=2252, avg=141.50, stdev=23.88 00:19:55.074 lat (usec): min=136, max=2273, avg=163.63, stdev=25.04 00:19:55.074 clat percentiles (usec): 00:19:55.074 | 1.00th=[ 124], 5.00th=[ 126], 10.00th=[ 128], 20.00th=[ 130], 00:19:55.074 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 137], 60.00th=[ 141], 00:19:55.074 | 70.00th=[ 145], 80.00th=[ 151], 90.00th=[ 161], 95.00th=[ 169], 00:19:55.074 | 99.00th=[ 188], 99.50th=[ 198], 99.90th=[ 469], 99.95th=[ 545], 00:19:55.074 | 99.99th=[ 635] 00:19:55.074 bw ( KiB/s): min= 6592, max=12288, per=100.00%, avg=10877.33, stdev=1430.89, samples=39 00:19:55.074 iops : min= 1648, max= 3072, avg=2719.33, stdev=357.72, samples=39 00:19:55.074 lat (usec) : 250=99.65%, 500=0.24%, 750=0.09%, 1000=0.01% 00:19:55.074 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:19:55.074 cpu : usr=0.65%, sys=2.54%, ctx=108299, majf=0, minf=2 00:19:55.074 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:55.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.074 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:55.074 issued rwts: total=54020,54272,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:55.074 latency : target=0, window=0, percentile=100.00%, depth=1 00:19:55.074 00:19:55.074 Run status group 0 (all jobs): 00:19:55.074 READ: bw=3601KiB/s (3688kB/s), 3601KiB/s-3601KiB/s (3688kB/s-3688kB/s), io=211MiB (221MB), run=60000-60000msec 00:19:55.074 WRITE: bw=3618KiB/s (3705kB/s), 3618KiB/s-3618KiB/s (3705kB/s-3705kB/s), io=212MiB (222MB), run=60000-60000msec 00:19:55.074 00:19:55.074 Disk stats (read/write): 00:19:55.074 nvme0n1: ios=54034/53993, merge=0/0, ticks=9948/8125, in_queue=18073, util=99.87% 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:55.074 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:55.074 10:12:23 -- common/autotest_common.sh@1198 -- # local i=0 00:19:55.074 10:12:23 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:19:55.074 10:12:23 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.074 10:12:23 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:19:55.074 10:12:23 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:55.074 10:12:23 -- common/autotest_common.sh@1210 -- # return 0 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:19:55.074 nvmf hotplug test: fio successful as expected 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:55.074 10:12:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.074 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:55.074 10:12:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:19:55.074 10:12:23 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:19:55.074 10:12:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:55.074 10:12:23 -- nvmf/common.sh@116 -- # sync 00:19:55.074 10:12:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:55.074 10:12:23 -- nvmf/common.sh@119 -- # set +e 00:19:55.075 10:12:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:55.075 10:12:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:55.075 rmmod nvme_tcp 00:19:55.075 rmmod nvme_fabrics 00:19:55.075 rmmod nvme_keyring 00:19:55.075 10:12:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:55.075 10:12:23 -- nvmf/common.sh@123 -- # set -e 00:19:55.075 10:12:23 -- nvmf/common.sh@124 -- # return 0 00:19:55.075 10:12:23 -- nvmf/common.sh@477 -- # '[' -n 91124 ']' 00:19:55.075 10:12:23 -- nvmf/common.sh@478 -- # killprocess 91124 00:19:55.075 10:12:23 -- common/autotest_common.sh@926 -- # '[' -z 91124 ']' 00:19:55.075 10:12:23 -- common/autotest_common.sh@930 -- # kill -0 91124 00:19:55.075 10:12:23 -- common/autotest_common.sh@931 -- # uname 00:19:55.075 10:12:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:55.075 10:12:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 91124 00:19:55.075 10:12:23 -- 
common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:55.075 10:12:23 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:55.075 killing process with pid 91124 00:19:55.075 10:12:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 91124' 00:19:55.075 10:12:23 -- common/autotest_common.sh@945 -- # kill 91124 00:19:55.075 10:12:23 -- common/autotest_common.sh@950 -- # wait 91124 00:19:55.075 10:12:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:55.075 10:12:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:55.075 10:12:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:55.075 10:12:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:55.075 10:12:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:55.075 10:12:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.075 10:12:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.075 10:12:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.075 10:12:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:55.075 00:19:55.075 real 1m4.573s 00:19:55.075 user 4m4.712s 00:19:55.075 sys 0m10.739s 00:19:55.075 10:12:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:55.075 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:55.075 ************************************ 00:19:55.075 END TEST nvmf_initiator_timeout 00:19:55.075 ************************************ 00:19:55.075 10:12:23 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:19:55.075 10:12:23 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:19:55.075 10:12:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:55.075 10:12:23 -- common/autotest_common.sh@10 -- # set +x 00:19:55.075 10:12:24 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:19:55.075 10:12:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:55.075 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.075 10:12:24 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:19:55.075 10:12:24 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:55.075 10:12:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:55.075 10:12:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:55.075 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.075 ************************************ 00:19:55.075 START TEST nvmf_multicontroller 00:19:55.075 ************************************ 00:19:55.075 10:12:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:19:55.075 * Looking for test storage... 
00:19:55.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:55.075 10:12:24 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:55.075 10:12:24 -- nvmf/common.sh@7 -- # uname -s 00:19:55.075 10:12:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:55.075 10:12:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:55.075 10:12:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:55.075 10:12:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:55.075 10:12:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:55.075 10:12:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:55.075 10:12:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:55.075 10:12:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:55.075 10:12:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:55.075 10:12:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:19:55.075 10:12:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:19:55.075 10:12:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:55.075 10:12:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:55.075 10:12:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:55.075 10:12:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:55.075 10:12:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:55.075 10:12:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:55.075 10:12:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:55.075 10:12:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.075 10:12:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.075 10:12:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.075 10:12:24 -- 
paths/export.sh@5 -- # export PATH 00:19:55.075 10:12:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:55.075 10:12:24 -- nvmf/common.sh@46 -- # : 0 00:19:55.075 10:12:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:55.075 10:12:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:55.075 10:12:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:55.075 10:12:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:55.075 10:12:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:55.075 10:12:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:55.075 10:12:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:55.075 10:12:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:55.075 10:12:24 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:55.075 10:12:24 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:55.075 10:12:24 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:19:55.075 10:12:24 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:19:55.075 10:12:24 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:55.075 10:12:24 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:19:55.075 10:12:24 -- host/multicontroller.sh@23 -- # nvmftestinit 00:19:55.075 10:12:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:55.075 10:12:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:55.075 10:12:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:55.075 10:12:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:55.075 10:12:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:55.075 10:12:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:55.075 10:12:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:55.075 10:12:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:55.075 10:12:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:55.075 10:12:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:55.075 10:12:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:55.075 10:12:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:55.075 10:12:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:55.075 10:12:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:55.075 10:12:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:55.075 10:12:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:55.075 10:12:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:55.075 10:12:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:55.075 10:12:24 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:55.075 10:12:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:55.075 10:12:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:55.075 10:12:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:55.075 10:12:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:55.075 10:12:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:55.075 Cannot find device "nvmf_tgt_br" 00:19:55.075 10:12:24 -- nvmf/common.sh@154 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:55.075 Cannot find device "nvmf_tgt_br2" 00:19:55.075 10:12:24 -- nvmf/common.sh@155 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:55.075 10:12:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:55.075 Cannot find device "nvmf_tgt_br" 00:19:55.075 10:12:24 -- nvmf/common.sh@157 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:55.075 Cannot find device "nvmf_tgt_br2" 00:19:55.075 10:12:24 -- nvmf/common.sh@158 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:55.075 10:12:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:55.075 10:12:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:55.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.075 10:12:24 -- nvmf/common.sh@161 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:55.075 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:55.075 10:12:24 -- nvmf/common.sh@162 -- # true 00:19:55.075 10:12:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:55.075 10:12:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:55.075 10:12:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:55.076 10:12:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:55.076 10:12:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:55.076 10:12:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:55.076 10:12:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:55.076 10:12:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:55.076 10:12:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:55.076 10:12:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:55.076 10:12:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:55.076 10:12:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:55.076 10:12:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:55.076 10:12:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:55.076 10:12:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:55.076 10:12:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:55.076 10:12:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:55.076 10:12:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:55.076 10:12:24 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:55.076 10:12:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:55.076 10:12:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:55.076 10:12:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:55.076 10:12:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:55.076 10:12:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:55.076 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:55.076 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.102 ms 00:19:55.076 00:19:55.076 --- 10.0.0.2 ping statistics --- 00:19:55.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.076 rtt min/avg/max/mdev = 0.102/0.102/0.102/0.000 ms 00:19:55.076 10:12:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:55.076 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:55.076 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:19:55.076 00:19:55.076 --- 10.0.0.3 ping statistics --- 00:19:55.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.076 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:55.076 10:12:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:55.076 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:55.076 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:55.076 00:19:55.076 --- 10.0.0.1 ping statistics --- 00:19:55.076 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:55.076 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:55.076 10:12:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:55.076 10:12:24 -- nvmf/common.sh@421 -- # return 0 00:19:55.076 10:12:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:55.076 10:12:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:55.076 10:12:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:55.076 10:12:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:55.076 10:12:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:55.076 10:12:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:55.076 10:12:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:55.076 10:12:24 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:19:55.076 10:12:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:55.076 10:12:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:55.076 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:24 -- nvmf/common.sh@469 -- # nvmfpid=92057 00:19:55.076 10:12:24 -- nvmf/common.sh@470 -- # waitforlisten 92057 00:19:55.076 10:12:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:19:55.076 10:12:24 -- common/autotest_common.sh@819 -- # '[' -z 92057 ']' 00:19:55.076 10:12:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:55.076 10:12:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:55.076 10:12:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:55.076 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
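waitforlisten above does not sleep for a fixed interval; it polls the target until the RPC socket answers, bailing out early if pid 92057 dies. A rough sketch of that pattern, assuming the helper probes the socket via scripts/rpc.py with rpc_get_methods (the exact probe is not visible in this trace):

    nvmfpid=92057                      # pid from the log above
    rpc_addr=/var/tmp/spdk.sock
    max_retries=100                    # matches 'local max_retries=100' above
    for ((i = 0; i < max_retries; i++)); do
        # Give up immediately if the target process is gone.
        kill -0 "$nvmfpid" 2>/dev/null || exit 1
        # Any successful RPC proves the socket is up and serving.
        if scripts/rpc.py -t 1 -s "$rpc_addr" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.5
    done

Polling this way keeps the test fast on an idle machine while still tolerating a slow target start, which a fixed sleep cannot do.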
00:19:55.076 10:12:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:55.076 10:12:24 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 [2024-07-14 10:12:24.491653] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:55.076 [2024-07-14 10:12:24.491733] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:55.076 [2024-07-14 10:12:24.631168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:55.076 [2024-07-14 10:12:24.691360] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:55.076 [2024-07-14 10:12:24.691779] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:55.076 [2024-07-14 10:12:24.691926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:55.076 [2024-07-14 10:12:24.692068] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:55.076 [2024-07-14 10:12:24.692338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:55.076 [2024-07-14 10:12:24.692396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:55.076 [2024-07-14 10:12:24.692398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.076 10:12:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:55.076 10:12:25 -- common/autotest_common.sh@852 -- # return 0 00:19:55.076 10:12:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:55.076 10:12:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:55.076 10:12:25 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 [2024-07-14 10:12:25.502745] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 Malloc0 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 [2024-07-14 10:12:25.566288] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 [2024-07-14 10:12:25.574160] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 Malloc1 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.076 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.076 10:12:25 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:55.076 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.076 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.334 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.334 10:12:25 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:19:55.334 10:12:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:55.334 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:55.334 10:12:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:55.334 10:12:25 -- host/multicontroller.sh@44 -- # bdevperf_pid=92109 00:19:55.334 10:12:25 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:55.334 10:12:25 -- host/multicontroller.sh@47 -- # waitforlisten 92109 /var/tmp/bdevperf.sock 00:19:55.334 10:12:25 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:19:55.334 10:12:25 -- common/autotest_common.sh@819 -- # '[' -z 92109 ']' 00:19:55.334 10:12:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:55.334 10:12:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:55.334 10:12:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:55.334 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:55.334 10:12:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:55.334 10:12:25 -- common/autotest_common.sh@10 -- # set +x 00:19:56.268 10:12:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:56.268 10:12:26 -- common/autotest_common.sh@852 -- # return 0 00:19:56.268 10:12:26 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:56.268 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.268 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.268 NVMe0n1 00:19:56.268 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.268 10:12:26 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:56.268 10:12:26 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:19:56.268 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.268 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.268 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.268 1 00:19:56.268 10:12:26 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:56.268 10:12:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.268 10:12:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:56.268 10:12:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:56.268 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.268 10:12:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:56.268 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.268 10:12:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:19:56.268 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.268 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.268 2024/07/14 10:12:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:56.268 request: 00:19:56.269 { 00:19:56.269 "method": "bdev_nvme_attach_controller", 00:19:56.269 "params": { 00:19:56.269 "name": "NVMe0", 00:19:56.269 "trtype": "tcp", 00:19:56.269 "traddr": "10.0.0.2", 00:19:56.269 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:19:56.269 "hostaddr": "10.0.0.2", 00:19:56.269 "hostsvcid": "60000", 00:19:56.269 "adrfam": "ipv4", 00:19:56.269 "trsvcid": "4420", 00:19:56.269 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:19:56.269 } 00:19:56.269 } 00:19:56.269 Got JSON-RPC error response 
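This first failure is the point of the test: controller NVMe0 already exists on 10.0.0.2:4420, and re-attaching under the same bdev name while changing the host identity (the -q hostnqn) is rejected with code -114, as the request dump above and the GoRPCClient line just below record. In isolation, assuming SPDK's scripts/rpc.py as the transport behind the harness's rpc_cmd wrapper, the check looks like:

    # Initial attach created controller NVMe0 (and bdev NVMe0n1).
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000
    # Same name, different hostnqn: must fail with
    # "A controller named NVMe0 already exists with the specified network path".
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 && exit 1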
00:19:56.269 GoRPCClient: error on JSON-RPC call 00:19:56.269 10:12:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # es=1 00:19:56.269 10:12:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.269 10:12:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.269 10:12:26 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:56.269 10:12:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.269 10:12:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:56.269 10:12:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:19:56.269 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 2024/07/14 10:12:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:19:56.269 request: 00:19:56.269 { 00:19:56.269 "method": "bdev_nvme_attach_controller", 00:19:56.269 "params": { 00:19:56.269 "name": "NVMe0", 00:19:56.269 "trtype": "tcp", 00:19:56.269 "traddr": "10.0.0.2", 00:19:56.269 "hostaddr": "10.0.0.2", 00:19:56.269 "hostsvcid": "60000", 00:19:56.269 "adrfam": "ipv4", 00:19:56.269 "trsvcid": "4420", 00:19:56.269 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:19:56.269 } 00:19:56.269 } 00:19:56.269 Got JSON-RPC error response 00:19:56.269 GoRPCClient: error on JSON-RPC call 00:19:56.269 10:12:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # es=1 00:19:56.269 10:12:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.269 10:12:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.269 10:12:26 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.269 10:12:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:56.269 10:12:26 -- 
common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 2024/07/14 10:12:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:19:56.269 request: 00:19:56.269 { 00:19:56.269 "method": "bdev_nvme_attach_controller", 00:19:56.269 "params": { 00:19:56.269 "name": "NVMe0", 00:19:56.269 "trtype": "tcp", 00:19:56.269 "traddr": "10.0.0.2", 00:19:56.269 "hostaddr": "10.0.0.2", 00:19:56.269 "hostsvcid": "60000", 00:19:56.269 "adrfam": "ipv4", 00:19:56.269 "trsvcid": "4420", 00:19:56.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.269 "multipath": "disable" 00:19:56.269 } 00:19:56.269 } 00:19:56.269 Got JSON-RPC error response 00:19:56.269 GoRPCClient: error on JSON-RPC call 00:19:56.269 10:12:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # es=1 00:19:56.269 10:12:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.269 10:12:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.269 10:12:26 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:56.269 10:12:26 -- common/autotest_common.sh@640 -- # local es=0 00:19:56.269 10:12:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:56.269 10:12:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:19:56.269 10:12:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:19:56.269 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.269 2024/07/14 10:12:26 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified 
network path 00:19:56.269 request: 00:19:56.269 { 00:19:56.269 "method": "bdev_nvme_attach_controller", 00:19:56.269 "params": { 00:19:56.269 "name": "NVMe0", 00:19:56.269 "trtype": "tcp", 00:19:56.269 "traddr": "10.0.0.2", 00:19:56.269 "hostaddr": "10.0.0.2", 00:19:56.269 "hostsvcid": "60000", 00:19:56.269 "adrfam": "ipv4", 00:19:56.269 "trsvcid": "4420", 00:19:56.269 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.269 "multipath": "failover" 00:19:56.269 } 00:19:56.269 } 00:19:56.269 Got JSON-RPC error response 00:19:56.269 GoRPCClient: error on JSON-RPC call 00:19:56.269 10:12:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@643 -- # es=1 00:19:56.269 10:12:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:19:56.269 10:12:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:19:56.269 10:12:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:19:56.269 10:12:26 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:56.269 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.269 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.527 00:19:56.527 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.527 10:12:26 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:56.527 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.527 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.527 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.527 10:12:26 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:19:56.527 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.527 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.527 00:19:56.527 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.527 10:12:26 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:56.527 10:12:26 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:19:56.527 10:12:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:56.527 10:12:26 -- common/autotest_common.sh@10 -- # set +x 00:19:56.527 10:12:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:56.527 10:12:26 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:19:56.527 10:12:26 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:57.902 0 00:19:57.902 10:12:28 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:19:57.902 10:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.902 10:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:57.902 10:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.902 10:12:28 -- host/multicontroller.sh@100 -- # killprocess 92109 00:19:57.902 10:12:28 -- common/autotest_common.sh@926 -- # '[' -z 92109 ']' 00:19:57.902 10:12:28 -- common/autotest_common.sh@930 -- # kill -0 92109 00:19:57.902 10:12:28 -- common/autotest_common.sh@931 -- # uname 00:19:57.902 10:12:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:19:57.902 10:12:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92109 00:19:57.902 10:12:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:57.902 10:12:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:57.902 killing process with pid 92109 00:19:57.902 10:12:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92109' 00:19:57.902 10:12:28 -- common/autotest_common.sh@945 -- # kill 92109 00:19:57.902 10:12:28 -- common/autotest_common.sh@950 -- # wait 92109 00:19:57.902 10:12:28 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:57.902 10:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.902 10:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:57.902 10:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.902 10:12:28 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:57.902 10:12:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:57.902 10:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:57.902 10:12:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:57.902 10:12:28 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:19:57.902 10:12:28 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:19:57.902 10:12:28 -- common/autotest_common.sh@1597 -- # read -r file 00:19:57.903 10:12:28 -- common/autotest_common.sh@1596 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:19:57.903 10:12:28 -- common/autotest_common.sh@1596 -- # sort -u 00:19:57.903 10:12:28 -- common/autotest_common.sh@1598 -- # cat 00:19:57.903 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:19:57.903 [2024-07-14 10:12:25.687556] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:57.903 [2024-07-14 10:12:25.687758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92109 ] 00:19:57.903 [2024-07-14 10:12:25.830689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.903 [2024-07-14 10:12:25.901943] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.903 [2024-07-14 10:12:26.900774] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name a8e00e94-f458-4937-a76a-0ca4e9f7918e already exists 00:19:57.903 [2024-07-14 10:12:26.900816] bdev.c:7603:bdev_register: *ERROR*: Unable to add uuid:a8e00e94-f458-4937-a76a-0ca4e9f7918e alias for bdev NVMe1n1 00:19:57.903 [2024-07-14 10:12:26.900835] bdev_nvme.c:4236:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:19:57.903 Running I/O for 1 seconds... 
00:19:57.903 
00:19:57.903                                                                                                  Latency(us) 
00:19:57.903 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:19:57.903 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 
00:19:57.903 NVMe0n1                                :       1.00   22673.08      88.57       0.00     0.00    5631.53    2993.80    9830.40 
00:19:57.903 =================================================================================================================== 
00:19:57.903 Total                                  :              22673.08      88.57       0.00     0.00    5631.53    2993.80    9830.40 
00:19:57.903 Received shutdown signal, test time was about 1.000000 seconds 
00:19:57.903 
00:19:57.903                                                                                                  Latency(us) 
00:19:57.903 Device Information                     : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max 
00:19:57.903 =================================================================================================================== 
00:19:57.903 Total                                  :       0.00       0.00       0.00       0.00     0.00       0.00       0.00       0.00 
00:19:57.903 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 
00:19:57.903 10:12:28 -- common/autotest_common.sh@1603 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 
00:19:57.903 10:12:28 -- common/autotest_common.sh@1597 -- # read -r file 
00:19:57.903 10:12:28 -- host/multicontroller.sh@108 -- # nvmftestfini 
00:19:57.903 10:12:28 -- nvmf/common.sh@476 -- # nvmfcleanup 
00:19:57.903 10:12:28 -- nvmf/common.sh@116 -- # sync 
00:19:57.903 10:12:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 
00:19:57.903 10:12:28 -- nvmf/common.sh@119 -- # set +e 
00:19:57.903 10:12:28 -- nvmf/common.sh@120 -- # for i in {1..20} 
00:19:57.903 10:12:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 
00:19:57.903 rmmod nvme_tcp 
00:19:57.903 rmmod nvme_fabrics 
00:19:57.903 rmmod nvme_keyring 
00:19:57.903 10:12:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 
00:19:57.903 10:12:28 -- nvmf/common.sh@123 -- # set -e 
00:19:57.903 10:12:28 -- nvmf/common.sh@124 -- # return 0 
00:19:57.903 10:12:28 -- nvmf/common.sh@477 -- # '[' -n 92057 ']' 
00:19:57.903 10:12:28 -- nvmf/common.sh@478 -- # killprocess 92057 
00:19:57.903 10:12:28 -- common/autotest_common.sh@926 -- # '[' -z 92057 ']' 
00:19:57.903 10:12:28 -- common/autotest_common.sh@930 -- # kill -0 92057 
00:19:57.903 10:12:28 -- common/autotest_common.sh@931 -- # uname 
00:19:57.903 10:12:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 
00:19:57.903 10:12:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92057 
00:19:57.903 10:12:28 -- common/autotest_common.sh@932 -- # process_name=reactor_1 
00:19:57.903 10:12:28 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:19:57.903 killing process with pid 92057 
00:19:57.903 10:12:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92057' 
00:19:57.903 10:12:28 -- common/autotest_common.sh@945 -- # kill 92057 
00:19:57.903 10:12:28 -- common/autotest_common.sh@950 -- # wait 92057 
00:19:58.161 10:12:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
00:19:58.161 10:12:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 
00:19:58.161 10:12:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 
00:19:58.161 10:12:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:19:58.161 10:12:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 
00:19:58.161 10:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:19:58.161 10:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
00:19:58.161 10:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:19:58.161 10:12:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:58.418 
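That completes the multicontroller test: teardown above, per-test timing below. Stripped of the NOT-wrapped negative checks, the positive path condenses to the RPC sequence already visible in the trace (rpc.py and bdevperf.py standing in for the harness wrappers; bdevperf was started earlier with -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f, so it idles until told to run):

    # Second path for NVMe0 on port 4421, then detach it again.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # A second controller under its own name is allowed; expect 2 controllers.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -i 10.0.0.2 -c 60000
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe
    # Kick the idling bdevperf instance into its 1-second write workload.
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bdevperf.sock perform_tests

The bdev UUID-collision errors captured in try.txt come from attaching that second controller (NVMe1) to the same namespace; judging by the END TEST marker and return codes, the test tolerates them and passes.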
00:19:58.418 real 0m4.684s 00:19:58.418 user 0m14.882s 00:19:58.418 sys 0m1.042s 00:19:58.418 10:12:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:58.418 ************************************ 00:19:58.418 10:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.418 END TEST nvmf_multicontroller 00:19:58.418 ************************************ 00:19:58.418 10:12:28 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:58.418 10:12:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:19:58.418 10:12:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:58.418 10:12:28 -- common/autotest_common.sh@10 -- # set +x 00:19:58.418 ************************************ 00:19:58.418 START TEST nvmf_aer 00:19:58.418 ************************************ 00:19:58.418 10:12:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:19:58.418 * Looking for test storage... 00:19:58.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:19:58.419 10:12:28 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:58.419 10:12:28 -- nvmf/common.sh@7 -- # uname -s 00:19:58.419 10:12:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:58.419 10:12:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:58.419 10:12:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:58.419 10:12:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:58.419 10:12:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:58.419 10:12:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:58.419 10:12:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:58.419 10:12:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:58.419 10:12:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:58.419 10:12:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:19:58.419 10:12:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:19:58.419 10:12:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:58.419 10:12:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:58.419 10:12:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:58.419 10:12:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:58.419 10:12:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:58.419 10:12:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:58.419 10:12:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:58.419 10:12:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.419 10:12:28 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.419 10:12:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.419 10:12:28 -- paths/export.sh@5 -- # export PATH 00:19:58.419 10:12:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:58.419 10:12:28 -- nvmf/common.sh@46 -- # : 0 00:19:58.419 10:12:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:58.419 10:12:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:58.419 10:12:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:58.419 10:12:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:58.419 10:12:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:58.419 10:12:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:58.419 10:12:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:58.419 10:12:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:58.419 10:12:28 -- host/aer.sh@11 -- # nvmftestinit 00:19:58.419 10:12:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:58.419 10:12:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:58.419 10:12:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:58.419 10:12:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:58.419 10:12:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:58.419 10:12:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:58.419 10:12:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:58.419 10:12:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:58.419 10:12:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:58.419 10:12:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:58.419 10:12:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:58.419 10:12:28 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:58.419 10:12:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:58.419 10:12:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:58.419 10:12:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:58.419 10:12:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:58.419 10:12:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:58.419 10:12:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:58.419 10:12:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:58.419 10:12:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:58.419 10:12:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:58.419 10:12:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:58.419 10:12:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:58.419 10:12:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:58.419 Cannot find device "nvmf_tgt_br" 00:19:58.419 10:12:28 -- nvmf/common.sh@154 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:58.419 Cannot find device "nvmf_tgt_br2" 00:19:58.419 10:12:28 -- nvmf/common.sh@155 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:58.419 10:12:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:58.419 Cannot find device "nvmf_tgt_br" 00:19:58.419 10:12:28 -- nvmf/common.sh@157 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:58.419 Cannot find device "nvmf_tgt_br2" 00:19:58.419 10:12:28 -- nvmf/common.sh@158 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:58.419 10:12:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:58.419 10:12:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:58.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.419 10:12:28 -- nvmf/common.sh@161 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:58.419 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:58.419 10:12:28 -- nvmf/common.sh@162 -- # true 00:19:58.419 10:12:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:58.677 10:12:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:58.677 10:12:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:58.677 10:12:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:58.677 10:12:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:58.677 10:12:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:58.677 10:12:29 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:58.677 10:12:29 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:58.677 10:12:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:58.677 10:12:29 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:58.677 10:12:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:58.677 10:12:29 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:58.677 10:12:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:58.677 10:12:29 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:58.677 10:12:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:58.677 10:12:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:58.677 10:12:29 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:58.677 10:12:29 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:58.677 10:12:29 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:58.677 10:12:29 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:58.677 10:12:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:58.677 10:12:29 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:58.677 10:12:29 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:58.677 10:12:29 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:58.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:58.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:19:58.677 00:19:58.677 --- 10.0.0.2 ping statistics --- 00:19:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.677 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:19:58.677 10:12:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:58.677 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:58.677 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:19:58.677 00:19:58.677 --- 10.0.0.3 ping statistics --- 00:19:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.677 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:19:58.677 10:12:29 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:58.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:58.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:58.677 00:19:58.677 --- 10.0.0.1 ping statistics --- 00:19:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:58.677 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:58.677 10:12:29 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:58.677 10:12:29 -- nvmf/common.sh@421 -- # return 0 00:19:58.677 10:12:29 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:58.677 10:12:29 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:58.677 10:12:29 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:58.677 10:12:29 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:58.678 10:12:29 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:58.678 10:12:29 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:58.678 10:12:29 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:58.678 10:12:29 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:19:58.678 10:12:29 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:58.678 10:12:29 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:58.678 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 10:12:29 -- nvmf/common.sh@469 -- # nvmfpid=92360 00:19:58.678 10:12:29 -- nvmf/common.sh@470 -- # waitforlisten 92360 00:19:58.678 10:12:29 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:58.678 10:12:29 -- common/autotest_common.sh@819 -- # '[' -z 92360 ']' 00:19:58.678 10:12:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.678 10:12:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:58.678 10:12:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.678 10:12:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:58.678 10:12:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.678 [2024-07-14 10:12:29.210662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:19:58.678 [2024-07-14 10:12:29.210757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.936 [2024-07-14 10:12:29.353312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.936 [2024-07-14 10:12:29.408402] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:58.936 [2024-07-14 10:12:29.408529] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.936 [2024-07-14 10:12:29.408540] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.936 [2024-07-14 10:12:29.408547] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
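Same launch pattern as the earlier multicontroller run: nvmf_tgt is started inside the target namespace (here with core mask 0xF, so the four reactor start-up notices follow below) and the harness blocks in waitforlisten until the RPC socket answers. Reduced to its two moving parts, with a minimal polling loop standing in for waitforlisten (assuming SPDK's scripts/rpc.py):

    # Start the target inside the namespace; -m 0xF pins it to four cores.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll until the app is up and serving JSON-RPC on the default socket.
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done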
00:19:58.936 [2024-07-14 10:12:29.408666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.936 [2024-07-14 10:12:29.409139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.936 [2024-07-14 10:12:29.409388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.936 [2024-07-14 10:12:29.409783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:59.871 10:12:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:59.871 10:12:30 -- common/autotest_common.sh@852 -- # return 0 00:19:59.871 10:12:30 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:59.871 10:12:30 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 10:12:30 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:59.871 10:12:30 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 [2024-07-14 10:12:30.159132] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 Malloc0 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 [2024-07-14 10:12:30.232911] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:19:59.871 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:59.871 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:19:59.871 [2024-07-14 10:12:30.240680] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:59.871 [ 00:19:59.871 { 00:19:59.871 "allow_any_host": true, 00:19:59.871 "hosts": [], 00:19:59.871 "listen_addresses": [], 00:19:59.871 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:59.871 "subtype": "Discovery" 00:19:59.871 }, 00:19:59.871 { 00:19:59.871 "allow_any_host": true, 00:19:59.871 "hosts": 
[], 00:19:59.871 "listen_addresses": [ 00:19:59.871 { 00:19:59.871 "adrfam": "IPv4", 00:19:59.871 "traddr": "10.0.0.2", 00:19:59.871 "transport": "TCP", 00:19:59.871 "trsvcid": "4420", 00:19:59.871 "trtype": "TCP" 00:19:59.871 } 00:19:59.871 ], 00:19:59.871 "max_cntlid": 65519, 00:19:59.871 "max_namespaces": 2, 00:19:59.871 "min_cntlid": 1, 00:19:59.871 "model_number": "SPDK bdev Controller", 00:19:59.871 "namespaces": [ 00:19:59.871 { 00:19:59.871 "bdev_name": "Malloc0", 00:19:59.871 "name": "Malloc0", 00:19:59.871 "nguid": "E9190C3A0ABC4444902DB2266C487E76", 00:19:59.871 "nsid": 1, 00:19:59.871 "uuid": "e9190c3a-0abc-4444-902d-b2266c487e76" 00:19:59.871 } 00:19:59.871 ], 00:19:59.871 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:59.871 "serial_number": "SPDK00000000000001", 00:19:59.871 "subtype": "NVMe" 00:19:59.871 } 00:19:59.871 ] 00:19:59.871 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:59.871 10:12:30 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:59.871 10:12:30 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:19:59.871 10:12:30 -- host/aer.sh@33 -- # aerpid=92414 00:19:59.871 10:12:30 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:19:59.871 10:12:30 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:19:59.871 10:12:30 -- common/autotest_common.sh@1244 -- # local i=0 00:19:59.871 10:12:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:59.871 10:12:30 -- common/autotest_common.sh@1246 -- # '[' 0 -lt 200 ']' 00:19:59.871 10:12:30 -- common/autotest_common.sh@1247 -- # i=1 00:19:59.871 10:12:30 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:19:59.871 10:12:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:59.871 10:12:30 -- common/autotest_common.sh@1246 -- # '[' 1 -lt 200 ']' 00:19:59.871 10:12:30 -- common/autotest_common.sh@1247 -- # i=2 00:19:59.871 10:12:30 -- common/autotest_common.sh@1248 -- # sleep 0.1 00:20:00.130 10:12:30 -- common/autotest_common.sh@1245 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:00.130 10:12:30 -- common/autotest_common.sh@1251 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:00.130 10:12:30 -- common/autotest_common.sh@1255 -- # return 0 00:20:00.130 10:12:30 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 Malloc1 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 [ 00:20:00.130 { 00:20:00.130 "allow_any_host": true, 00:20:00.130 "hosts": [], 00:20:00.130 "listen_addresses": [], 00:20:00.130 Asynchronous Event Request test 00:20:00.130 Attaching to 10.0.0.2 00:20:00.130 Attached to 10.0.0.2 00:20:00.130 Registering asynchronous event callbacks... 
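The output here interleaves two processes: the aer tool (test/nvme/aer/aer) has connected and armed its callbacks, synchronizing with the harness through /tmp/aer_touch_file, while the harness hot-adds a second namespace to the live subsystem. That addition is what produces the "Changed Namespace" notice for log page 4 shown just below. The trigger in isolation (commands as in the trace, rpc.py standing in for rpc_cmd):

    # Hot-adding a namespace to a subsystem with a connected host fires a
    # Namespace Attribute Changed AEN (log page 4, the changed-namespace list).
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2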
00:20:00.130 Starting namespace attribute notice tests for all controllers... 00:20:00.130 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:00.130 aer_cb - Changed Namespace 00:20:00.130 Cleaning up... 00:20:00.130 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:00.130 "subtype": "Discovery" 00:20:00.130 }, 00:20:00.130 { 00:20:00.130 "allow_any_host": true, 00:20:00.130 "hosts": [], 00:20:00.130 "listen_addresses": [ 00:20:00.130 { 00:20:00.130 "adrfam": "IPv4", 00:20:00.130 "traddr": "10.0.0.2", 00:20:00.130 "transport": "TCP", 00:20:00.130 "trsvcid": "4420", 00:20:00.130 "trtype": "TCP" 00:20:00.130 } 00:20:00.130 ], 00:20:00.130 "max_cntlid": 65519, 00:20:00.130 "max_namespaces": 2, 00:20:00.130 "min_cntlid": 1, 00:20:00.130 "model_number": "SPDK bdev Controller", 00:20:00.130 "namespaces": [ 00:20:00.130 { 00:20:00.130 "bdev_name": "Malloc0", 00:20:00.130 "name": "Malloc0", 00:20:00.130 "nguid": "E9190C3A0ABC4444902DB2266C487E76", 00:20:00.130 "nsid": 1, 00:20:00.130 "uuid": "e9190c3a-0abc-4444-902d-b2266c487e76" 00:20:00.130 }, 00:20:00.130 { 00:20:00.130 "bdev_name": "Malloc1", 00:20:00.130 "name": "Malloc1", 00:20:00.130 "nguid": "95B53CE303A54B2CBBA985377E3651F3", 00:20:00.130 "nsid": 2, 00:20:00.130 "uuid": "95b53ce3-03a5-4b2c-bba9-85377e3651f3" 00:20:00.130 } 00:20:00.130 ], 00:20:00.130 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:00.130 "serial_number": "SPDK00000000000001", 00:20:00.130 "subtype": "NVMe" 00:20:00.130 } 00:20:00.130 ] 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@43 -- # wait 92414 00:20:00.130 10:12:30 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:00.130 10:12:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:00.130 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.130 10:12:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:00.130 10:12:30 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:00.130 10:12:30 -- host/aer.sh@51 -- # nvmftestfini 00:20:00.130 10:12:30 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:00.130 10:12:30 -- nvmf/common.sh@116 -- # sync 00:20:00.130 10:12:30 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:00.130 10:12:30 -- nvmf/common.sh@119 -- # set +e 00:20:00.130 10:12:30 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:00.130 10:12:30 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:00.130 rmmod nvme_tcp 00:20:00.389 rmmod nvme_fabrics 00:20:00.389 rmmod nvme_keyring 00:20:00.389 10:12:30 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:00.389 10:12:30 -- nvmf/common.sh@123 -- # set -e 00:20:00.389 10:12:30 -- nvmf/common.sh@124 -- # return 0 00:20:00.389 10:12:30 -- nvmf/common.sh@477 -- # '[' -n 92360 ']' 00:20:00.389 10:12:30 -- nvmf/common.sh@478 -- # killprocess 92360 00:20:00.389 10:12:30 -- common/autotest_common.sh@926 -- # '[' -z 92360 ']' 00:20:00.389 
10:12:30 -- common/autotest_common.sh@930 -- # kill -0 92360 00:20:00.389 10:12:30 -- common/autotest_common.sh@931 -- # uname 00:20:00.389 10:12:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:00.389 10:12:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92360 00:20:00.389 10:12:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:00.389 10:12:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:00.389 killing process with pid 92360 00:20:00.389 10:12:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92360' 00:20:00.389 10:12:30 -- common/autotest_common.sh@945 -- # kill 92360 00:20:00.389 [2024-07-14 10:12:30.738123] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:00.389 10:12:30 -- common/autotest_common.sh@950 -- # wait 92360 00:20:00.389 10:12:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:00.389 10:12:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:00.389 10:12:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:00.389 10:12:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:00.390 10:12:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:00.390 10:12:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.390 10:12:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.390 10:12:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.649 10:12:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:00.649 00:20:00.649 real 0m2.215s 00:20:00.649 user 0m6.203s 00:20:00.649 sys 0m0.624s 00:20:00.649 10:12:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:00.649 10:12:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.649 ************************************ 00:20:00.649 END TEST nvmf_aer 00:20:00.649 ************************************ 00:20:00.649 10:12:31 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:00.649 10:12:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:00.649 10:12:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:00.649 10:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:00.649 ************************************ 00:20:00.649 START TEST nvmf_async_init 00:20:00.649 ************************************ 00:20:00.649 10:12:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:00.649 * Looking for test storage... 
00:20:00.649 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:00.649 10:12:31 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:00.649 10:12:31 -- nvmf/common.sh@7 -- # uname -s 00:20:00.649 10:12:31 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:00.649 10:12:31 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:00.649 10:12:31 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:00.649 10:12:31 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:00.649 10:12:31 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:00.649 10:12:31 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:00.649 10:12:31 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:00.649 10:12:31 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:00.649 10:12:31 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:00.649 10:12:31 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:00.649 10:12:31 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:00.649 10:12:31 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:00.649 10:12:31 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:00.649 10:12:31 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:00.649 10:12:31 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:00.649 10:12:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:00.649 10:12:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:00.649 10:12:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:00.649 10:12:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.649 10:12:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.649 10:12:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.649 10:12:31 -- 
paths/export.sh@5 -- # export PATH 00:20:00.649 10:12:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:00.649 10:12:31 -- nvmf/common.sh@46 -- # : 0 00:20:00.649 10:12:31 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:00.649 10:12:31 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:00.649 10:12:31 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:00.649 10:12:31 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:00.649 10:12:31 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:00.649 10:12:31 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:00.649 10:12:31 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:00.649 10:12:31 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:00.649 10:12:31 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:00.649 10:12:31 -- host/async_init.sh@14 -- # null_block_size=512 00:20:00.649 10:12:31 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:00.649 10:12:31 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:00.649 10:12:31 -- host/async_init.sh@20 -- # uuidgen 00:20:00.649 10:12:31 -- host/async_init.sh@20 -- # tr -d - 00:20:00.649 10:12:31 -- host/async_init.sh@20 -- # nguid=797ec045bd9c4f08960efedcde961473 00:20:00.649 10:12:31 -- host/async_init.sh@22 -- # nvmftestinit 00:20:00.649 10:12:31 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:00.649 10:12:31 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:00.649 10:12:31 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:00.649 10:12:31 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:00.649 10:12:31 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:00.649 10:12:31 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:00.649 10:12:31 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:00.649 10:12:31 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:00.649 10:12:31 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:00.649 10:12:31 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:00.649 10:12:31 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:00.649 10:12:31 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:00.649 10:12:31 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:00.649 10:12:31 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:00.649 10:12:31 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:00.649 10:12:31 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:00.649 10:12:31 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:00.649 10:12:31 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:00.649 10:12:31 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:00.649 10:12:31 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:00.649 10:12:31 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:00.649 10:12:31 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:00.649 10:12:31 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:00.649 10:12:31 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:00.649 Cannot find device "nvmf_tgt_br" 00:20:00.649 10:12:31 -- nvmf/common.sh@154 -- # true 00:20:00.649 10:12:31 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:00.649 Cannot find device "nvmf_tgt_br2" 00:20:00.649 10:12:31 -- nvmf/common.sh@155 -- # true 00:20:00.649 10:12:31 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:00.649 10:12:31 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:00.649 Cannot find device "nvmf_tgt_br" 00:20:00.649 10:12:31 -- nvmf/common.sh@157 -- # true 00:20:00.649 10:12:31 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:00.649 Cannot find device "nvmf_tgt_br2" 00:20:00.649 10:12:31 -- nvmf/common.sh@158 -- # true 00:20:00.649 10:12:31 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:00.909 10:12:31 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:00.909 10:12:31 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:00.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.909 10:12:31 -- nvmf/common.sh@161 -- # true 00:20:00.909 10:12:31 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:00.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:00.909 10:12:31 -- nvmf/common.sh@162 -- # true 00:20:00.909 10:12:31 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:00.909 10:12:31 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:00.909 10:12:31 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:00.909 10:12:31 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:00.909 10:12:31 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:00.909 10:12:31 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:00.909 10:12:31 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:00.909 10:12:31 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:00.909 10:12:31 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:00.909 10:12:31 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:00.909 10:12:31 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:00.909 10:12:31 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:00.909 10:12:31 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:00.909 10:12:31 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:00.909 10:12:31 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:00.909 10:12:31 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:00.909 10:12:31 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:00.909 10:12:31 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:00.909 10:12:31 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:00.909 10:12:31 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.909 10:12:31 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.909 10:12:31 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.909 10:12:31 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:01.167 10:12:31 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:01.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:01.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.091 ms 00:20:01.167 00:20:01.167 --- 10.0.0.2 ping statistics --- 00:20:01.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.167 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:20:01.167 10:12:31 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:01.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:01.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:20:01.167 00:20:01.167 --- 10.0.0.3 ping statistics --- 00:20:01.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.167 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:20:01.167 10:12:31 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:01.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:01.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:20:01.167 00:20:01.167 --- 10.0.0.1 ping statistics --- 00:20:01.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:01.167 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:20:01.167 10:12:31 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:01.167 10:12:31 -- nvmf/common.sh@421 -- # return 0 00:20:01.167 10:12:31 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:01.167 10:12:31 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:01.167 10:12:31 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:01.167 10:12:31 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:01.167 10:12:31 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:01.167 10:12:31 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:01.167 10:12:31 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:01.167 10:12:31 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:01.167 10:12:31 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:01.167 10:12:31 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:01.167 10:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.167 10:12:31 -- nvmf/common.sh@469 -- # nvmfpid=92583 00:20:01.167 10:12:31 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:01.167 10:12:31 -- nvmf/common.sh@470 -- # waitforlisten 92583 00:20:01.167 10:12:31 -- common/autotest_common.sh@819 -- # '[' -z 92583 ']' 00:20:01.167 10:12:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:01.167 10:12:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:01.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:01.167 10:12:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
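[annotation] The "Cannot find device" / "Cannot open network namespace" messages above are the harmless cleanup pass of nvmf_veth_init failing on a fresh host; the setup that follows then builds the test topology from scratch. A condensed sketch of that topology, using only names and addresses taken from the trace (the second target interface, nvmf_tgt_if2 at 10.0.0.3, is created the same way and omitted here; run as root):

    ip netns add nvmf_tgt_ns_spdk                               # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge                             # bridge joins the two host-side peers
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                          # initiator -> target, as in the trace

The sub-millisecond RTTs in the ping output are expected: both endpoints are veth devices on the same host.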
00:20:01.167 10:12:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:01.167 10:12:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.167 [2024-07-14 10:12:31.537816] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:01.167 [2024-07-14 10:12:31.537897] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.167 [2024-07-14 10:12:31.672228] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:01.426 [2024-07-14 10:12:31.737111] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:01.426 [2024-07-14 10:12:31.737266] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:01.426 [2024-07-14 10:12:31.737280] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:01.426 [2024-07-14 10:12:31.737289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:01.426 [2024-07-14 10:12:31.737316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.993 10:12:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:01.993 10:12:32 -- common/autotest_common.sh@852 -- # return 0 00:20:01.993 10:12:32 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:01.993 10:12:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:01.993 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 10:12:32 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:02.252 10:12:32 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 [2024-07-14 10:12:32.588167] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 null0 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 797ec045bd9c4f08960efedcde961473 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@31 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.252 [2024-07-14 10:12:32.628270] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:02.252 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.252 10:12:32 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:02.252 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.252 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.511 nvme0n1 00:20:02.511 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.511 10:12:32 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:02.511 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.511 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.511 [ 00:20:02.512 { 00:20:02.512 "aliases": [ 00:20:02.512 "797ec045-bd9c-4f08-960e-fedcde961473" 00:20:02.512 ], 00:20:02.512 "assigned_rate_limits": { 00:20:02.512 "r_mbytes_per_sec": 0, 00:20:02.512 "rw_ios_per_sec": 0, 00:20:02.512 "rw_mbytes_per_sec": 0, 00:20:02.512 "w_mbytes_per_sec": 0 00:20:02.512 }, 00:20:02.512 "block_size": 512, 00:20:02.512 "claimed": false, 00:20:02.512 "driver_specific": { 00:20:02.512 "mp_policy": "active_passive", 00:20:02.512 "nvme": [ 00:20:02.512 { 00:20:02.512 "ctrlr_data": { 00:20:02.512 "ana_reporting": false, 00:20:02.512 "cntlid": 1, 00:20:02.512 "firmware_revision": "24.01.1", 00:20:02.512 "model_number": "SPDK bdev Controller", 00:20:02.512 "multi_ctrlr": true, 00:20:02.512 "oacs": { 00:20:02.512 "firmware": 0, 00:20:02.512 "format": 0, 00:20:02.512 "ns_manage": 0, 00:20:02.512 "security": 0 00:20:02.512 }, 00:20:02.512 "serial_number": "00000000000000000000", 00:20:02.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.512 "vendor_id": "0x8086" 00:20:02.512 }, 00:20:02.512 "ns_data": { 00:20:02.512 "can_share": true, 00:20:02.512 "id": 1 00:20:02.512 }, 00:20:02.512 "trid": { 00:20:02.512 "adrfam": "IPv4", 00:20:02.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.512 "traddr": "10.0.0.2", 00:20:02.512 "trsvcid": "4420", 00:20:02.512 "trtype": "TCP" 00:20:02.512 }, 00:20:02.512 "vs": { 00:20:02.512 "nvme_version": "1.3" 00:20:02.512 } 00:20:02.512 } 00:20:02.512 ] 00:20:02.512 }, 00:20:02.512 "name": "nvme0n1", 00:20:02.512 "num_blocks": 2097152, 00:20:02.512 "product_name": "NVMe disk", 00:20:02.512 "supported_io_types": { 00:20:02.512 "abort": true, 00:20:02.512 "compare": true, 00:20:02.512 "compare_and_write": true, 00:20:02.512 "flush": true, 00:20:02.512 "nvme_admin": true, 00:20:02.512 "nvme_io": true, 00:20:02.512 "read": true, 00:20:02.512 "reset": true, 00:20:02.512 "unmap": false, 00:20:02.512 "write": true, 00:20:02.512 "write_zeroes": true 00:20:02.512 }, 00:20:02.512 "uuid": "797ec045-bd9c-4f08-960e-fedcde961473", 00:20:02.512 "zoned": false 00:20:02.512 } 00:20:02.512 ] 00:20:02.512 10:12:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.512 10:12:32 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:02.512 10:12:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.512 10:12:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.512 [2024-07-14 10:12:32.884242] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:02.512 [2024-07-14 10:12:32.884321] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x160fff0 (9): Bad file descriptor 00:20:02.512 [2024-07-14 10:12:33.016330] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:02.512 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.512 10:12:33 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:02.512 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.512 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.512 [ 00:20:02.512 { 00:20:02.512 "aliases": [ 00:20:02.512 "797ec045-bd9c-4f08-960e-fedcde961473" 00:20:02.512 ], 00:20:02.512 "assigned_rate_limits": { 00:20:02.512 "r_mbytes_per_sec": 0, 00:20:02.512 "rw_ios_per_sec": 0, 00:20:02.512 "rw_mbytes_per_sec": 0, 00:20:02.512 "w_mbytes_per_sec": 0 00:20:02.512 }, 00:20:02.512 "block_size": 512, 00:20:02.512 "claimed": false, 00:20:02.512 "driver_specific": { 00:20:02.512 "mp_policy": "active_passive", 00:20:02.512 "nvme": [ 00:20:02.512 { 00:20:02.512 "ctrlr_data": { 00:20:02.512 "ana_reporting": false, 00:20:02.512 "cntlid": 2, 00:20:02.512 "firmware_revision": "24.01.1", 00:20:02.512 "model_number": "SPDK bdev Controller", 00:20:02.512 "multi_ctrlr": true, 00:20:02.512 "oacs": { 00:20:02.512 "firmware": 0, 00:20:02.512 "format": 0, 00:20:02.512 "ns_manage": 0, 00:20:02.512 "security": 0 00:20:02.512 }, 00:20:02.512 "serial_number": "00000000000000000000", 00:20:02.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.512 "vendor_id": "0x8086" 00:20:02.512 }, 00:20:02.512 "ns_data": { 00:20:02.512 "can_share": true, 00:20:02.512 "id": 1 00:20:02.512 }, 00:20:02.512 "trid": { 00:20:02.512 "adrfam": "IPv4", 00:20:02.512 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.512 "traddr": "10.0.0.2", 00:20:02.512 "trsvcid": "4420", 00:20:02.512 "trtype": "TCP" 00:20:02.512 }, 00:20:02.512 "vs": { 00:20:02.512 "nvme_version": "1.3" 00:20:02.512 } 00:20:02.512 } 00:20:02.512 ] 00:20:02.512 }, 00:20:02.512 "name": "nvme0n1", 00:20:02.512 "num_blocks": 2097152, 00:20:02.512 "product_name": "NVMe disk", 00:20:02.512 "supported_io_types": { 00:20:02.512 "abort": true, 00:20:02.512 "compare": true, 00:20:02.512 "compare_and_write": true, 00:20:02.512 "flush": true, 00:20:02.512 "nvme_admin": true, 00:20:02.512 "nvme_io": true, 00:20:02.512 "read": true, 00:20:02.512 "reset": true, 00:20:02.512 "unmap": false, 00:20:02.512 "write": true, 00:20:02.512 "write_zeroes": true 00:20:02.512 }, 00:20:02.512 "uuid": "797ec045-bd9c-4f08-960e-fedcde961473", 00:20:02.512 "zoned": false 00:20:02.512 } 00:20:02.512 ] 00:20:02.512 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.512 10:12:33 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.512 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.512 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.512 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@53 -- # mktemp 00:20:02.772 10:12:33 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.2yfizHWuXx 00:20:02.772 10:12:33 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:02.772 10:12:33 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.2yfizHWuXx 00:20:02.772 10:12:33 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host 
nqn.2016-06.io.spdk:cnode0 --disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 [2024-07-14 10:12:33.076384] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.772 [2024-07-14 10:12:33.076512] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2yfizHWuXx 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.2yfizHWuXx 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 [2024-07-14 10:12:33.092351] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:02.772 nvme0n1 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 [ 00:20:02.772 { 00:20:02.772 "aliases": [ 00:20:02.772 "797ec045-bd9c-4f08-960e-fedcde961473" 00:20:02.772 ], 00:20:02.772 "assigned_rate_limits": { 00:20:02.772 "r_mbytes_per_sec": 0, 00:20:02.772 "rw_ios_per_sec": 0, 00:20:02.772 "rw_mbytes_per_sec": 0, 00:20:02.772 "w_mbytes_per_sec": 0 00:20:02.772 }, 00:20:02.772 "block_size": 512, 00:20:02.772 "claimed": false, 00:20:02.772 "driver_specific": { 00:20:02.772 "mp_policy": "active_passive", 00:20:02.772 "nvme": [ 00:20:02.772 { 00:20:02.772 "ctrlr_data": { 00:20:02.772 "ana_reporting": false, 00:20:02.772 "cntlid": 3, 00:20:02.772 "firmware_revision": "24.01.1", 00:20:02.772 "model_number": "SPDK bdev Controller", 00:20:02.772 "multi_ctrlr": true, 00:20:02.772 "oacs": { 00:20:02.772 "firmware": 0, 00:20:02.772 "format": 0, 00:20:02.772 "ns_manage": 0, 00:20:02.772 "security": 0 00:20:02.772 }, 00:20:02.772 "serial_number": "00000000000000000000", 00:20:02.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.772 "vendor_id": "0x8086" 00:20:02.772 }, 00:20:02.772 "ns_data": { 00:20:02.772 "can_share": true, 00:20:02.772 "id": 1 00:20:02.772 }, 00:20:02.772 "trid": { 00:20:02.772 "adrfam": "IPv4", 00:20:02.772 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:02.772 "traddr": "10.0.0.2", 00:20:02.772 "trsvcid": "4421", 00:20:02.772 "trtype": "TCP" 00:20:02.772 }, 00:20:02.772 "vs": { 00:20:02.772 "nvme_version": "1.3" 00:20:02.772 } 00:20:02.772 } 00:20:02.772 ] 00:20:02.772 }, 00:20:02.772 
"name": "nvme0n1", 00:20:02.772 "num_blocks": 2097152, 00:20:02.772 "product_name": "NVMe disk", 00:20:02.772 "supported_io_types": { 00:20:02.772 "abort": true, 00:20:02.772 "compare": true, 00:20:02.772 "compare_and_write": true, 00:20:02.772 "flush": true, 00:20:02.772 "nvme_admin": true, 00:20:02.772 "nvme_io": true, 00:20:02.772 "read": true, 00:20:02.772 "reset": true, 00:20:02.772 "unmap": false, 00:20:02.772 "write": true, 00:20:02.772 "write_zeroes": true 00:20:02.772 }, 00:20:02.772 "uuid": "797ec045-bd9c-4f08-960e-fedcde961473", 00:20:02.772 "zoned": false 00:20:02.772 } 00:20:02.772 ] 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:02.772 10:12:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.772 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:02.772 10:12:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.772 10:12:33 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.2yfizHWuXx 00:20:02.772 10:12:33 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:02.772 10:12:33 -- host/async_init.sh@78 -- # nvmftestfini 00:20:02.772 10:12:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:02.772 10:12:33 -- nvmf/common.sh@116 -- # sync 00:20:02.772 10:12:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:02.772 10:12:33 -- nvmf/common.sh@119 -- # set +e 00:20:02.772 10:12:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:02.772 10:12:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:02.772 rmmod nvme_tcp 00:20:02.772 rmmod nvme_fabrics 00:20:02.772 rmmod nvme_keyring 00:20:02.772 10:12:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:02.772 10:12:33 -- nvmf/common.sh@123 -- # set -e 00:20:02.772 10:12:33 -- nvmf/common.sh@124 -- # return 0 00:20:02.772 10:12:33 -- nvmf/common.sh@477 -- # '[' -n 92583 ']' 00:20:02.772 10:12:33 -- nvmf/common.sh@478 -- # killprocess 92583 00:20:02.772 10:12:33 -- common/autotest_common.sh@926 -- # '[' -z 92583 ']' 00:20:02.772 10:12:33 -- common/autotest_common.sh@930 -- # kill -0 92583 00:20:02.772 10:12:33 -- common/autotest_common.sh@931 -- # uname 00:20:03.031 10:12:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.031 10:12:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92583 00:20:03.031 10:12:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:03.031 10:12:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.031 10:12:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92583' 00:20:03.031 killing process with pid 92583 00:20:03.031 10:12:33 -- common/autotest_common.sh@945 -- # kill 92583 00:20:03.031 10:12:33 -- common/autotest_common.sh@950 -- # wait 92583 00:20:03.031 10:12:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:03.031 10:12:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:03.031 10:12:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:03.031 10:12:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:03.031 10:12:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:03.031 10:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.031 10:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.031 10:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.031 10:12:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:03.031 
00:20:03.031 real 0m2.541s 00:20:03.031 user 0m2.385s 00:20:03.031 sys 0m0.594s 00:20:03.031 10:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.031 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.031 ************************************ 00:20:03.031 END TEST nvmf_async_init 00:20:03.031 ************************************ 00:20:03.289 10:12:33 -- nvmf/nvmf.sh@94 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:03.289 10:12:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:03.289 10:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:03.289 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 ************************************ 00:20:03.290 START TEST dma 00:20:03.290 ************************************ 00:20:03.290 10:12:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:03.290 * Looking for test storage... 00:20:03.290 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.290 10:12:33 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.290 10:12:33 -- nvmf/common.sh@7 -- # uname -s 00:20:03.290 10:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.290 10:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.290 10:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.290 10:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.290 10:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.290 10:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.290 10:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.290 10:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.290 10:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.290 10:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.290 10:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:03.290 10:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:03.290 10:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.290 10:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.290 10:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.290 10:12:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.290 10:12:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.290 10:12:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.290 10:12:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.290 10:12:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.290 10:12:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.290 10:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.290 10:12:33 -- paths/export.sh@5 -- # export PATH 00:20:03.290 10:12:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.290 10:12:33 -- nvmf/common.sh@46 -- # : 0 00:20:03.290 10:12:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:03.290 10:12:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:03.290 10:12:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:03.290 10:12:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.290 10:12:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.290 10:12:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:03.290 10:12:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:03.290 10:12:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:03.290 10:12:33 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:03.290 10:12:33 -- host/dma.sh@13 -- # exit 0 00:20:03.290 00:20:03.290 real 0m0.093s 00:20:03.290 user 0m0.039s 00:20:03.290 sys 0m0.061s 00:20:03.290 10:12:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.290 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 ************************************ 00:20:03.290 END TEST dma 00:20:03.290 ************************************ 00:20:03.290 10:12:33 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:03.290 10:12:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:03.290 10:12:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:03.290 10:12:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.290 ************************************ 00:20:03.290 START TEST nvmf_identify 00:20:03.290 ************************************ 00:20:03.290 10:12:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:03.290 * Looking for test storage... 
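[annotation] The dma suite above passes without doing anything: host/dma.sh only has work to do on RDMA transports, so with --transport=tcp the @12/@13 guard exits 0 straight away, hence the 0m0.093s runtime. In script form the guard amounts to the line below; the variable name is an assumption, since the trace only shows the already-expanded test '[' tcp '!=' rdma ']':

    [ "$TEST_TRANSPORT" != rdma ] && exit 0   # hypothetical variable name; skip DMA tests on non-RDMA transports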
00:20:03.548 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:03.548 10:12:33 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:03.548 10:12:33 -- nvmf/common.sh@7 -- # uname -s 00:20:03.548 10:12:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.548 10:12:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.548 10:12:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.548 10:12:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.548 10:12:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.549 10:12:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.549 10:12:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.549 10:12:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.549 10:12:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.549 10:12:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.549 10:12:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:03.549 10:12:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:03.549 10:12:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.549 10:12:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.549 10:12:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:03.549 10:12:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:03.549 10:12:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.549 10:12:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.549 10:12:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.549 10:12:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.549 10:12:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.549 10:12:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.549 10:12:33 -- paths/export.sh@5 
-- # export PATH 00:20:03.549 10:12:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.549 10:12:33 -- nvmf/common.sh@46 -- # : 0 00:20:03.549 10:12:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:03.549 10:12:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:03.549 10:12:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:03.549 10:12:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.549 10:12:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.549 10:12:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:03.549 10:12:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:03.549 10:12:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:03.549 10:12:33 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:03.549 10:12:33 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:03.549 10:12:33 -- host/identify.sh@14 -- # nvmftestinit 00:20:03.549 10:12:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:03.549 10:12:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.549 10:12:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:03.549 10:12:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:03.549 10:12:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:03.549 10:12:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.549 10:12:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:03.549 10:12:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.549 10:12:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:03.549 10:12:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:03.549 10:12:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:03.549 10:12:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:03.549 10:12:33 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:03.549 10:12:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:03.549 10:12:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:03.549 10:12:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:03.549 10:12:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:03.549 10:12:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:03.549 10:12:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:03.549 10:12:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:03.549 10:12:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:03.549 10:12:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:03.549 10:12:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:03.549 10:12:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:03.549 10:12:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:03.549 10:12:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:03.549 10:12:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:03.549 10:12:33 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:03.549 Cannot find device "nvmf_tgt_br" 00:20:03.549 10:12:33 -- nvmf/common.sh@154 -- # true 00:20:03.549 10:12:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:03.549 Cannot find device "nvmf_tgt_br2" 00:20:03.549 10:12:33 -- nvmf/common.sh@155 -- # true 00:20:03.549 10:12:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:03.549 10:12:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:03.549 Cannot find device "nvmf_tgt_br" 00:20:03.549 10:12:33 -- nvmf/common.sh@157 -- # true 00:20:03.549 10:12:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:03.549 Cannot find device "nvmf_tgt_br2" 00:20:03.549 10:12:33 -- nvmf/common.sh@158 -- # true 00:20:03.549 10:12:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:03.549 10:12:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:03.549 10:12:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:03.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.549 10:12:34 -- nvmf/common.sh@161 -- # true 00:20:03.549 10:12:34 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:03.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:03.549 10:12:34 -- nvmf/common.sh@162 -- # true 00:20:03.549 10:12:34 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:03.549 10:12:34 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:03.549 10:12:34 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:03.549 10:12:34 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:03.549 10:12:34 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:03.549 10:12:34 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:03.808 10:12:34 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:03.808 10:12:34 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:03.808 10:12:34 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:03.808 10:12:34 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:03.808 10:12:34 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:03.808 10:12:34 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:03.808 10:12:34 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:03.808 10:12:34 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.808 10:12:34 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.808 10:12:34 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.808 10:12:34 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:03.808 10:12:34 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:03.808 10:12:34 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.808 10:12:34 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.808 10:12:34 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.808 10:12:34 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.808 10:12:34 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.808 10:12:34 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:03.808 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.808 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:20:03.808 00:20:03.808 --- 10.0.0.2 ping statistics --- 00:20:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.808 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:20:03.808 10:12:34 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:03.808 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.808 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:20:03.808 00:20:03.808 --- 10.0.0.3 ping statistics --- 00:20:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.808 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:20:03.808 10:12:34 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.808 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.808 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:20:03.808 00:20:03.808 --- 10.0.0.1 ping statistics --- 00:20:03.808 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.808 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:20:03.808 10:12:34 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.808 10:12:34 -- nvmf/common.sh@421 -- # return 0 00:20:03.808 10:12:34 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:03.808 10:12:34 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.808 10:12:34 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:03.808 10:12:34 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:03.808 10:12:34 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.808 10:12:34 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:03.808 10:12:34 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:03.808 10:12:34 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:03.808 10:12:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:03.808 10:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:03.808 10:12:34 -- host/identify.sh@19 -- # nvmfpid=92852 00:20:03.808 10:12:34 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.809 10:12:34 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:03.809 10:12:34 -- host/identify.sh@23 -- # waitforlisten 92852 00:20:03.809 10:12:34 -- common/autotest_common.sh@819 -- # '[' -z 92852 ']' 00:20:03.809 10:12:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.809 10:12:34 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:03.809 10:12:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:03.809 10:12:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:03.809 10:12:34 -- common/autotest_common.sh@10 -- # set +x 00:20:03.809 [2024-07-14 10:12:34.324823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
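[annotation] With the identify test's copy of the veth topology up and ping-verified, the target is started inside the namespace with a wider core mask than async_init used (-m 0xF instead of -m 0x1), which is why the EAL log that follows reports four cores and four reactors. A rough stand-in for the nvmfappstart/waitforlisten pair in the trace; rpc_get_methods is just a cheap RPC used here to poll readiness, while the real helper is more careful about timeouts:

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # block until the app is up and answering on the default RPC socket
    until scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done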
00:20:03.809 [2024-07-14 10:12:34.324928] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:04.067 [2024-07-14 10:12:34.466996] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:04.067 [2024-07-14 10:12:34.556845] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:04.067 [2024-07-14 10:12:34.557166] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:04.067 [2024-07-14 10:12:34.557312] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:04.067 [2024-07-14 10:12:34.557382] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:04.067 [2024-07-14 10:12:34.557517] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:04.067 [2024-07-14 10:12:34.557623] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:04.067 [2024-07-14 10:12:34.558302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.067 [2024-07-14 10:12:34.558322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:05.003 10:12:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:05.003 10:12:35 -- common/autotest_common.sh@852 -- # return 0 00:20:05.003 10:12:35 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:05.003 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.003 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.003 [2024-07-14 10:12:35.315373] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:05.003 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.003 10:12:35 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:05.003 10:12:35 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:05.003 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.003 10:12:35 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:05.003 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.003 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.003 Malloc0 00:20:05.003 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:05.004 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.004 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.004 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:05.004 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.004 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.004 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:05.004 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.004 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.004 [2024-07-14 10:12:35.425541] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:05.004 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:05.004 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.004 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.004 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:05.004 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.004 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.004 [2024-07-14 10:12:35.441320] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:05.004 [ 00:20:05.004 { 00:20:05.004 "allow_any_host": true, 00:20:05.004 "hosts": [], 00:20:05.004 "listen_addresses": [ 00:20:05.004 { 00:20:05.004 "adrfam": "IPv4", 00:20:05.004 "traddr": "10.0.0.2", 00:20:05.004 "transport": "TCP", 00:20:05.004 "trsvcid": "4420", 00:20:05.004 "trtype": "TCP" 00:20:05.004 } 00:20:05.004 ], 00:20:05.004 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:05.004 "subtype": "Discovery" 00:20:05.004 }, 00:20:05.004 { 00:20:05.004 "allow_any_host": true, 00:20:05.004 "hosts": [], 00:20:05.004 "listen_addresses": [ 00:20:05.004 { 00:20:05.004 "adrfam": "IPv4", 00:20:05.004 "traddr": "10.0.0.2", 00:20:05.004 "transport": "TCP", 00:20:05.004 "trsvcid": "4420", 00:20:05.004 "trtype": "TCP" 00:20:05.004 } 00:20:05.004 ], 00:20:05.004 "max_cntlid": 65519, 00:20:05.004 "max_namespaces": 32, 00:20:05.004 "min_cntlid": 1, 00:20:05.004 "model_number": "SPDK bdev Controller", 00:20:05.004 "namespaces": [ 00:20:05.004 { 00:20:05.004 "bdev_name": "Malloc0", 00:20:05.004 "eui64": "ABCDEF0123456789", 00:20:05.004 "name": "Malloc0", 00:20:05.004 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:05.004 "nsid": 1, 00:20:05.004 "uuid": "83bb02a4-9a69-4328-a1c9-e7899125d9ce" 00:20:05.004 } 00:20:05.004 ], 00:20:05.004 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:05.004 "serial_number": "SPDK00000000000001", 00:20:05.004 "subtype": "NVMe" 00:20:05.004 } 00:20:05.004 ] 00:20:05.004 10:12:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.004 10:12:35 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:05.004 [2024-07-14 10:12:35.474476] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
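[annotation] Everything the target exports is visible in the nvmf_get_subsystems dump above: the discovery subsystem plus nqn.2016-06.io.spdk:cnode1 with Malloc0 as namespace 1, both listening on 10.0.0.2:4420. The identify run that follows connects to the discovery subsystem; its command line, reformatted from the trace, is:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all   # enable all debug log flags; the source of the *DEBUG* flood below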
00:20:05.004 [2024-07-14 10:12:35.474529] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92905 ] 00:20:05.268 [2024-07-14 10:12:35.613919] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:20:05.268 [2024-07-14 10:12:35.614003] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:05.268 [2024-07-14 10:12:35.614010] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:05.268 [2024-07-14 10:12:35.614023] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:05.268 [2024-07-14 10:12:35.614033] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:05.268 [2024-07-14 10:12:35.614174] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:20:05.268 [2024-07-14 10:12:35.614282] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x12a7d70 0 00:20:05.268 [2024-07-14 10:12:35.619279] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:05.268 [2024-07-14 10:12:35.619322] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:05.268 [2024-07-14 10:12:35.619345] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:05.268 [2024-07-14 10:12:35.619350] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:05.268 [2024-07-14 10:12:35.619397] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.619404] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.619409] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.268 [2024-07-14 10:12:35.619424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:05.268 [2024-07-14 10:12:35.619459] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.268 [2024-07-14 10:12:35.627317] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.268 [2024-07-14 10:12:35.627340] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.268 [2024-07-14 10:12:35.627362] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627367] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.268 [2024-07-14 10:12:35.627381] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:05.268 [2024-07-14 10:12:35.627389] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:20:05.268 [2024-07-14 10:12:35.627395] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:20:05.268 [2024-07-14 10:12:35.627412] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627418] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.268 [2024-07-14 
10:12:35.627422] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.268 [2024-07-14 10:12:35.627432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-14 10:12:35.627461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.268 [2024-07-14 10:12:35.627537] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.268 [2024-07-14 10:12:35.627560] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.268 [2024-07-14 10:12:35.627564] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627568] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.268 [2024-07-14 10:12:35.627575] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:20:05.268 [2024-07-14 10:12:35.627582] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:20:05.268 [2024-07-14 10:12:35.627604] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627624] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627628] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.268 [2024-07-14 10:12:35.627652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-14 10:12:35.627688] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.268 [2024-07-14 10:12:35.627740] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.268 [2024-07-14 10:12:35.627747] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.268 [2024-07-14 10:12:35.627751] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627755] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.268 [2024-07-14 10:12:35.627761] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:20:05.268 [2024-07-14 10:12:35.627770] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:20:05.268 [2024-07-14 10:12:35.627777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627781] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627785] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.268 [2024-07-14 10:12:35.627792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-14 10:12:35.627811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.268 [2024-07-14 10:12:35.627861] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.268 [2024-07-14 10:12:35.627868] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.268 [2024-07-14 10:12:35.627871] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627876] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.268 [2024-07-14 10:12:35.627883] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:05.268 [2024-07-14 10:12:35.627893] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627901] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.268 [2024-07-14 10:12:35.627908] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.268 [2024-07-14 10:12:35.627926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.268 [2024-07-14 10:12:35.627976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.268 [2024-07-14 10:12:35.627982] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.268 [2024-07-14 10:12:35.627986] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.268 [2024-07-14 10:12:35.627990] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.627996] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:20:05.269 [2024-07-14 10:12:35.628001] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:20:05.269 [2024-07-14 10:12:35.628009] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:05.269 [2024-07-14 10:12:35.628114] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:20:05.269 [2024-07-14 10:12:35.628120] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:05.269 [2024-07-14 10:12:35.628129] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628133] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628137] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628144] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-14 10:12:35.628163] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.269 [2024-07-14 10:12:35.628230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.269 [2024-07-14 10:12:35.628238] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.269 [2024-07-14 10:12:35.628242] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
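The capsules above are SPDK's controller-initialization state machine performing the NVMe enable handshake over the fabric: having observed CC.EN = 0 && CSTS.RDY = 0, the host writes CC.EN = 1 (the FABRIC PROPERTY SET capsule), and the trace below then polls CSTS with PROPERTY GETs until RDY reads 1 within the 15000 ms timeout. A minimal self-contained sketch of that handshake follows; prop_get()/prop_set() and the simulated register file are hypothetical stand-ins for illustration, not SPDK API.

/* CC.EN/CSTS.RDY enable handshake, as traced above (NVMe register
 * layout: CC at offset 0x14, CSTS at 0x1c, EN and RDY in bit 0).
 * prop_get()/prop_set() stand in for Fabrics Property Get/Set; a fake
 * register file plays the target so the sketch runs standalone. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC   0x14u
#define NVME_REG_CSTS 0x1cu
#define NVME_CC_EN    (1u << 0)
#define NVME_CSTS_RDY (1u << 0)

static uint32_t g_regs[0x40];   /* simulated controller registers */

static uint32_t prop_get(uint32_t off) { return g_regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t val)
{
    g_regs[off / 4] = val;
    if (off == NVME_REG_CC)     /* simulated target: RDY follows EN */
        g_regs[NVME_REG_CSTS / 4] = (val & NVME_CC_EN) ? NVME_CSTS_RDY : 0;
}

static bool enable_controller(void)
{
    /* "CC.EN = 0 && CSTS.RDY = 0": disabled, safe to enable. */
    if ((prop_get(NVME_REG_CC) & NVME_CC_EN) == 0 &&
        (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) == 0) {
        /* "Setting CC.EN = 1" */
        prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_EN);
    }
    /* "wait for CSTS.RDY = 1 (timeout 15000 ms)": bounded poll; each
     * iteration corresponds to one FABRIC PROPERTY GET in the trace
     * (a real loop would sleep between polls). */
    for (int ms = 0; ms < 15000; ms++) {
        if (prop_get(NVME_REG_CSTS) & NVME_CSTS_RDY) {
            return true;
        }
    }
    return false;
}

int main(void)
{
    printf("controller %s\n", enable_controller() ? "ready" : "timed out");
    return 0;
}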
00:20:05.269 [2024-07-14 10:12:35.628246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.628253] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:05.269 [2024-07-14 10:12:35.628263] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628268] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628272] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-14 10:12:35.628298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.269 [2024-07-14 10:12:35.628364] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.269 [2024-07-14 10:12:35.628373] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.269 [2024-07-14 10:12:35.628377] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628381] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.628387] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:05.269 [2024-07-14 10:12:35.628393] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628401] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:20:05.269 [2024-07-14 10:12:35.628416] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628426] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628443] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-14 10:12:35.628465] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.269 [2024-07-14 10:12:35.628561] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.269 [2024-07-14 10:12:35.628568] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.269 [2024-07-14 10:12:35.628587] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628592] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7d70): datao=0, datal=4096, cccid=0 00:20:05.269 [2024-07-14 10:12:35.628597] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f15f0) on tqpair(0x12a7d70): expected_datao=0, 
payload_size=4096 00:20:05.269 [2024-07-14 10:12:35.628606] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628611] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628619] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.269 [2024-07-14 10:12:35.628625] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.269 [2024-07-14 10:12:35.628629] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628632] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.628642] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:20:05.269 [2024-07-14 10:12:35.628647] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:20:05.269 [2024-07-14 10:12:35.628652] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:20:05.269 [2024-07-14 10:12:35.628658] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:20:05.269 [2024-07-14 10:12:35.628662] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:20:05.269 [2024-07-14 10:12:35.628667] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628680] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628688] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628693] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628696] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628704] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.269 [2024-07-14 10:12:35.628725] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.269 [2024-07-14 10:12:35.628791] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.269 [2024-07-14 10:12:35.628798] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.269 [2024-07-14 10:12:35.628802] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628806] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f15f0) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.628815] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628819] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628823] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.269 [2024-07-14 
10:12:35.628836] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628840] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628843] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.269 [2024-07-14 10:12:35.628855] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628859] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628863] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.269 [2024-07-14 10:12:35.628875] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628878] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628882] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628888] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.269 [2024-07-14 10:12:35.628893] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628905] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:05.269 [2024-07-14 10:12:35.628913] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628917] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.628920] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.628927] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-14 10:12:35.628948] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f15f0, cid 0, qid 0 00:20:05.269 [2024-07-14 10:12:35.628955] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1750, cid 1, qid 0 00:20:05.269 [2024-07-14 10:12:35.628960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f18b0, cid 2, qid 0 00:20:05.269 [2024-07-14 10:12:35.628965] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.269 [2024-07-14 10:12:35.628970] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1b70, cid 4, qid 0 00:20:05.269 [2024-07-14 10:12:35.629059] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.269 [2024-07-14 10:12:35.629066] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.269 [2024-07-14 10:12:35.629070] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.629074] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x12f1b70) on tqpair=0x12a7d70 00:20:05.269 [2024-07-14 10:12:35.629080] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:20:05.269 [2024-07-14 10:12:35.629086] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:20:05.269 [2024-07-14 10:12:35.629096] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.629101] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.629104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7d70) 00:20:05.269 [2024-07-14 10:12:35.629112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.269 [2024-07-14 10:12:35.629131] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1b70, cid 4, qid 0 00:20:05.269 [2024-07-14 10:12:35.629194] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.269 [2024-07-14 10:12:35.629201] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.269 [2024-07-14 10:12:35.629205] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.629209] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7d70): datao=0, datal=4096, cccid=4 00:20:05.269 [2024-07-14 10:12:35.629242] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f1b70) on tqpair(0x12a7d70): expected_datao=0, payload_size=4096 00:20:05.269 [2024-07-14 10:12:35.629252] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.269 [2024-07-14 10:12:35.629256] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629265] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.270 [2024-07-14 10:12:35.629271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.270 [2024-07-14 10:12:35.629275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1b70) on tqpair=0x12a7d70 00:20:05.270 [2024-07-14 10:12:35.629294] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:20:05.270 [2024-07-14 10:12:35.629321] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629327] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629331] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7d70) 00:20:05.270 [2024-07-14 10:12:35.629338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.270 [2024-07-14 10:12:35.629346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629350] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629354] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x12a7d70) 00:20:05.270 [2024-07-14 10:12:35.629360] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.270 [2024-07-14 10:12:35.629388] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1b70, cid 4, qid 0 00:20:05.270 [2024-07-14 10:12:35.629395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1cd0, cid 5, qid 0 00:20:05.270 [2024-07-14 10:12:35.629507] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.270 [2024-07-14 10:12:35.629514] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.270 [2024-07-14 10:12:35.629518] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629522] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7d70): datao=0, datal=1024, cccid=4 00:20:05.270 [2024-07-14 10:12:35.629527] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f1b70) on tqpair(0x12a7d70): expected_datao=0, payload_size=1024 00:20:05.270 [2024-07-14 10:12:35.629534] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629538] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629544] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.270 [2024-07-14 10:12:35.629550] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.270 [2024-07-14 10:12:35.629554] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.629558] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1cd0) on tqpair=0x12a7d70 00:20:05.270 [2024-07-14 10:12:35.670266] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.270 [2024-07-14 10:12:35.670292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.270 [2024-07-14 10:12:35.670312] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.670317] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1b70) on tqpair=0x12a7d70 00:20:05.270 [2024-07-14 10:12:35.670334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.670339] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.670343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7d70) 00:20:05.270 [2024-07-14 10:12:35.670352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.270 [2024-07-14 10:12:35.670385] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1b70, cid 4, qid 0 00:20:05.270 [2024-07-14 10:12:35.670461] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.270 [2024-07-14 10:12:35.670468] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.270 [2024-07-14 10:12:35.670472] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.270 [2024-07-14 10:12:35.670475] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7d70): datao=0, datal=3072, cccid=4 00:20:05.270 [2024-07-14 10:12:35.670480] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f1b70) on tqpair(0x12a7d70): expected_datao=0, payload_size=3072 00:20:05.270 [2024-07-14 
10:12:35.670489] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670493] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670501] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:05.270 [2024-07-14 10:12:35.670507] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:05.270 [2024-07-14 10:12:35.670510] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670514] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1b70) on tqpair=0x12a7d70
00:20:05.270 [2024-07-14 10:12:35.670525] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x12a7d70)
00:20:05.270 [2024-07-14 10:12:35.670556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:05.270 [2024-07-14 10:12:35.670598] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1b70, cid 4, qid 0
00:20:05.270 [2024-07-14 10:12:35.670683] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:05.270 [2024-07-14 10:12:35.670690] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:05.270 [2024-07-14 10:12:35.670694] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670697] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x12a7d70): datao=0, datal=8, cccid=4
00:20:05.270 [2024-07-14 10:12:35.670702] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12f1b70) on tqpair(0x12a7d70): expected_datao=0, payload_size=8
00:20:05.270 [2024-07-14 10:12:35.670709] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.670713] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.715280] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:05.270 [2024-07-14 10:12:35.715310] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:05.270 [2024-07-14 10:12:35.715331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:05.270 [2024-07-14 10:12:35.715336] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1b70) on tqpair=0x12a7d70
00:20:05.270 =====================================================
00:20:05.270 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:20:05.270 =====================================================
00:20:05.270 Controller Capabilities/Features
00:20:05.270 ================================
00:20:05.270 Vendor ID: 0000
00:20:05.270 Subsystem Vendor ID: 0000
00:20:05.270 Serial Number: ....................
00:20:05.270 Model Number: ........................................
00:20:05.270 Firmware Version: 24.01.1
00:20:05.270 Recommended Arb Burst: 0
00:20:05.270 IEEE OUI Identifier: 00 00 00
00:20:05.270 Multi-path I/O
00:20:05.270 May have multiple subsystem ports: No
00:20:05.270 May have multiple controllers: No
00:20:05.270 Associated with SR-IOV VF: No
00:20:05.270 Max Data Transfer Size: 131072
00:20:05.270 Max Number of Namespaces: 0
00:20:05.270 Max Number of I/O Queues: 1024
00:20:05.270 NVMe Specification Version (VS): 1.3
00:20:05.270 NVMe Specification Version (Identify): 1.3
00:20:05.270 Maximum Queue Entries: 128
00:20:05.270 Contiguous Queues Required: Yes
00:20:05.270 Arbitration Mechanisms Supported
00:20:05.270 Weighted Round Robin: Not Supported
00:20:05.270 Vendor Specific: Not Supported
00:20:05.270 Reset Timeout: 15000 ms
00:20:05.270 Doorbell Stride: 4 bytes
00:20:05.270 NVM Subsystem Reset: Not Supported
00:20:05.270 Command Sets Supported
00:20:05.270 NVM Command Set: Supported
00:20:05.270 Boot Partition: Not Supported
00:20:05.270 Memory Page Size Minimum: 4096 bytes
00:20:05.270 Memory Page Size Maximum: 4096 bytes
00:20:05.270 Persistent Memory Region: Not Supported
00:20:05.270 Optional Asynchronous Events Supported
00:20:05.270 Namespace Attribute Notices: Not Supported
00:20:05.270 Firmware Activation Notices: Not Supported
00:20:05.270 ANA Change Notices: Not Supported
00:20:05.270 PLE Aggregate Log Change Notices: Not Supported
00:20:05.270 LBA Status Info Alert Notices: Not Supported
00:20:05.270 EGE Aggregate Log Change Notices: Not Supported
00:20:05.270 Normal NVM Subsystem Shutdown event: Not Supported
00:20:05.270 Zone Descriptor Change Notices: Not Supported
00:20:05.270 Discovery Log Change Notices: Supported
00:20:05.270 Controller Attributes
00:20:05.270 128-bit Host Identifier: Not Supported
00:20:05.270 Non-Operational Permissive Mode: Not Supported
00:20:05.270 NVM Sets: Not Supported
00:20:05.270 Read Recovery Levels: Not Supported
00:20:05.270 Endurance Groups: Not Supported
00:20:05.270 Predictable Latency Mode: Not Supported
00:20:05.270 Traffic Based Keep ALive: Not Supported
00:20:05.270 Namespace Granularity: Not Supported
00:20:05.270 SQ Associations: Not Supported
00:20:05.270 UUID List: Not Supported
00:20:05.270 Multi-Domain Subsystem: Not Supported
00:20:05.270 Fixed Capacity Management: Not Supported
00:20:05.270 Variable Capacity Management: Not Supported
00:20:05.270 Delete Endurance Group: Not Supported
00:20:05.270 Delete NVM Set: Not Supported
00:20:05.270 Extended LBA Formats Supported: Not Supported
00:20:05.270 Flexible Data Placement Supported: Not Supported
00:20:05.270
00:20:05.270 Controller Memory Buffer Support
00:20:05.270 ================================
00:20:05.270 Supported: No
00:20:05.270
00:20:05.270 Persistent Memory Region Support
00:20:05.270 ================================
00:20:05.270 Supported: No
00:20:05.270
00:20:05.270 Admin Command Set Attributes
00:20:05.270 ============================
00:20:05.270 Security Send/Receive: Not Supported
00:20:05.270 Format NVM: Not Supported
00:20:05.270 Firmware Activate/Download: Not Supported
00:20:05.270 Namespace Management: Not Supported
00:20:05.270 Device Self-Test: Not Supported
00:20:05.270 Directives: Not Supported
00:20:05.270 NVMe-MI: Not Supported
00:20:05.270 Virtualization Management: Not Supported
00:20:05.270 Doorbell Buffer Config: Not Supported
00:20:05.270 Get LBA Status Capability: Not Supported
00:20:05.270 Command & Feature Lockdown Capability: Not Supported
00:20:05.270 Abort Command Limit: 1
00:20:05.270 Async Event Request Limit: 4
00:20:05.270 Number of Firmware Slots: N/A
00:20:05.270 Firmware Slot 1 Read-Only: N/A
00:20:05.270 Firmware Activation Without Reset: N/A
00:20:05.270 Multiple Update Detection Support: N/A
00:20:05.271 Firmware Update Granularity: No Information Provided
00:20:05.271 Per-Namespace SMART Log: No
00:20:05.271 Asymmetric Namespace Access Log Page: Not Supported
00:20:05.271 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:20:05.271 Command Effects Log Page: Not Supported
00:20:05.271 Get Log Page Extended Data: Supported
00:20:05.271 Telemetry Log Pages: Not Supported
00:20:05.271 Persistent Event Log Pages: Not Supported
00:20:05.271 Supported Log Pages Log Page: May Support
00:20:05.271 Commands Supported & Effects Log Page: Not Supported
00:20:05.271 Feature Identifiers & Effects Log Page:May Support
00:20:05.271 NVMe-MI Commands & Effects Log Page: May Support
00:20:05.271 Data Area 4 for Telemetry Log: Not Supported
00:20:05.271 Error Log Page Entries Supported: 128
00:20:05.271 Keep Alive: Not Supported
00:20:05.271
00:20:05.271 NVM Command Set Attributes
00:20:05.271 ==========================
00:20:05.271 Submission Queue Entry Size
00:20:05.271 Max: 1
00:20:05.271 Min: 1
00:20:05.271 Completion Queue Entry Size
00:20:05.271 Max: 1
00:20:05.271 Min: 1
00:20:05.271 Number of Namespaces: 0
00:20:05.271 Compare Command: Not Supported
00:20:05.271 Write Uncorrectable Command: Not Supported
00:20:05.271 Dataset Management Command: Not Supported
00:20:05.271 Write Zeroes Command: Not Supported
00:20:05.271 Set Features Save Field: Not Supported
00:20:05.271 Reservations: Not Supported
00:20:05.271 Timestamp: Not Supported
00:20:05.271 Copy: Not Supported
00:20:05.271 Volatile Write Cache: Not Present
00:20:05.271 Atomic Write Unit (Normal): 1
00:20:05.271 Atomic Write Unit (PFail): 1
00:20:05.271 Atomic Compare & Write Unit: 1
00:20:05.271 Fused Compare & Write: Supported
00:20:05.271 Scatter-Gather List
00:20:05.271 SGL Command Set: Supported
00:20:05.271 SGL Keyed: Supported
00:20:05.271 SGL Bit Bucket Descriptor: Not Supported
00:20:05.271 SGL Metadata Pointer: Not Supported
00:20:05.271 Oversized SGL: Not Supported
00:20:05.271 SGL Metadata Address: Not Supported
00:20:05.271 SGL Offset: Supported
00:20:05.271 Transport SGL Data Block: Not Supported
00:20:05.271 Replay Protected Memory Block: Not Supported
00:20:05.271
00:20:05.271 Firmware Slot Information
00:20:05.271 =========================
00:20:05.271 Active slot: 0
00:20:05.271
00:20:05.271
00:20:05.271 Error Log
00:20:05.271 =========
00:20:05.271
00:20:05.271 Active Namespaces
00:20:05.271 =================
00:20:05.271 Discovery Log Page
00:20:05.271 ==================
00:20:05.271 Generation Counter: 2
00:20:05.271 Number of Records: 2
00:20:05.271 Record Format: 0
00:20:05.271
00:20:05.271 Discovery Log Entry 0
00:20:05.271 ----------------------
00:20:05.271 Transport Type: 3 (TCP)
00:20:05.271 Address Family: 1 (IPv4)
00:20:05.271 Subsystem Type: 3 (Current Discovery Subsystem)
00:20:05.271 Entry Flags:
00:20:05.271 Duplicate Returned Information: 1
00:20:05.271 Explicit Persistent Connection Support for Discovery: 1
00:20:05.271 Transport Requirements:
00:20:05.271 Secure Channel: Not Required
00:20:05.271 Port ID: 0 (0x0000)
00:20:05.271 Controller ID: 65535 (0xffff)
00:20:05.271 Admin Max SQ Size: 128
00:20:05.271 Transport Service Identifier: 4420
00:20:05.271 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:20:05.271 Transport Address: 10.0.0.2
00:20:05.271 Discovery Log Entry 1
00:20:05.271 ----------------------
00:20:05.271 Transport Type: 3 (TCP)
00:20:05.271 Address Family: 1 (IPv4)
00:20:05.271 Subsystem Type: 2 (NVM Subsystem)
00:20:05.271 Entry Flags:
00:20:05.271 Duplicate Returned Information: 0
00:20:05.271 Explicit Persistent Connection Support for Discovery: 0
00:20:05.271 Transport Requirements:
00:20:05.271 Secure Channel: Not Required
00:20:05.271 Port ID: 0 (0x0000)
00:20:05.271 Controller ID: 65535 (0xffff)
00:20:05.271 Admin Max SQ Size: 128
00:20:05.271 Transport Service Identifier: 4420
00:20:05.271 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:20:05.271 Transport Address: 10.0.0.2 [2024-07-14 10:12:35.715465] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:20:05.271 [2024-07-14 10:12:35.715483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:05.271 [2024-07-14 10:12:35.715491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:05.271 [2024-07-14 10:12:35.715497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:05.271 [2024-07-14 10:12:35.715503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:05.271 [2024-07-14 10:12:35.715515] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:05.271 [2024-07-14 10:12:35.715520] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:05.271 [2024-07-14 10:12:35.715523] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70)
00:20:05.271 [2024-07-14 10:12:35.715533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:05.271 [2024-07-14 10:12:35.715559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0
00:20:05.271 [2024-07-14 10:12:35.715651] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:05.271 [2024-07-14 10:12:35.715658] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:05.271 [2024-07-14 10:12:35.715662] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:05.271 [2024-07-14 10:12:35.715666] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70
00:20:05.271 [2024-07-14 10:12:35.715675] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:05.271 [2024-07-14 10:12:35.715680] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:05.271 [2024-07-14 10:12:35.715683] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70)
00:20:05.271 [2024-07-14 10:12:35.715691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:05.271 [2024-07-14 10:12:35.715715] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0
00:20:05.271 [2024-07-14 10:12:35.715789] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:05.271 [2024-07-14 10:12:35.715795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:05.271 [2024-07-14 10:12:35.715799]
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.271 [2024-07-14 10:12:35.715809] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:05.271 [2024-07-14 10:12:35.715814] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:05.271 [2024-07-14 10:12:35.715824] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715828] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715832] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.271 [2024-07-14 10:12:35.715839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.271 [2024-07-14 10:12:35.715858] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.271 [2024-07-14 10:12:35.715908] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.271 [2024-07-14 10:12:35.715914] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.271 [2024-07-14 10:12:35.715918] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715922] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.271 [2024-07-14 10:12:35.715934] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715938] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.715942] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.271 [2024-07-14 10:12:35.715949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.271 [2024-07-14 10:12:35.715967] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.271 [2024-07-14 10:12:35.716020] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.271 [2024-07-14 10:12:35.716026] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.271 [2024-07-14 10:12:35.716030] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716034] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.271 [2024-07-14 10:12:35.716045] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716049] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716053] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.271 [2024-07-14 10:12:35.716060] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.271 [2024-07-14 10:12:35.716078] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.271 [2024-07-14 10:12:35.716130] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.271 [2024-07-14 
10:12:35.716137] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.271 [2024-07-14 10:12:35.716141] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716145] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.271 [2024-07-14 10:12:35.716156] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716160] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716164] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.271 [2024-07-14 10:12:35.716171] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.271 [2024-07-14 10:12:35.716189] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.271 [2024-07-14 10:12:35.716262] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.271 [2024-07-14 10:12:35.716271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.271 [2024-07-14 10:12:35.716275] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.271 [2024-07-14 10:12:35.716279] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716291] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716296] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716300] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716329] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716416] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716421] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716424] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716432] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716511] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716517] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716521] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
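The two-entry Discovery Log Page printed above was fetched by the GET LOG PAGE (02) sequence earlier in the trace: cdw10:00ff0070 reads the 1 KiB header of log page 0x70 (the datal=1024 transfer), cdw10:02ff0070 reads 3 KiB of records (datal=3072), and cdw10:00010070 re-reads the first 8 bytes (datal=8), which hold the generation counter, so the host can confirm the log did not change mid-read. A sketch of fetching the same page through SPDK's public API, under the simplifying assumption of an already-connected discovery controller and a log small enough for a single 4 KiB read (no offset loop, no genctr re-check):

/* Fetch and print discovery log page 0x70 from a connected discovery
 * controller. Single 4 KiB read for brevity; production code should
 * read at increasing offsets and re-check genctr, as the trace does. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static bool g_log_done;

static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	(void)arg;
	(void)cpl;
	g_log_done = true;
}

int
dump_discovery_log(struct spdk_nvme_ctrlr *ctrlr)
{
	/* 4 KiB holds the 1 KiB header plus up to three 1 KiB records. */
	struct spdk_nvmf_discovery_log_page *log = spdk_dma_zmalloc(4096, 0, NULL);

	if (log == NULL) {
		return -1;
	}
	g_log_done = false;
	if (spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
					     0, log, 4096, 0,
					     get_log_done, NULL) != 0) {
		spdk_dma_free(log);
		return -1;
	}
	while (!g_log_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	printf("Generation Counter: %" PRIu64 "\n", log->genctr);
	printf("Number of Records: %" PRIu64 "\n", log->numrec);
	for (uint64_t i = 0; i < log->numrec && i < 3; i++) {
		const struct spdk_nvmf_discovery_log_page_entry *e = &log->entries[i];

		/* Entry fields are fixed-size and padded, hence the widths. */
		printf("subnqn: %.256s traddr: %.256s trsvcid: %.32s\n",
		       e->subnqn, e->traddr, e->trsvcid);
	}
	spdk_dma_free(log);
	return 0;
}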
00:20:05.272 [2024-07-14 10:12:35.716525] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716536] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716541] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716545] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716552] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716571] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716637] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716644] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716648] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716652] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716663] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716667] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716671] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716696] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716749] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716755] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716759] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716763] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716774] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716778] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716782] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716807] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716859] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716866] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716870] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716874] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716885] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716889] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716893] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.716900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.716918] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.716974] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.716980] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.716984] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.716988] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.716999] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717003] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717007] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.717014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.717032] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.717085] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.717091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.717095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.717110] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717114] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717118] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.717125] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.717143] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.717193] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.717199] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.717203] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717207] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.717227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717233] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 
10:12:35.717237] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.717244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.717265] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.272 [2024-07-14 10:12:35.717321] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.272 [2024-07-14 10:12:35.717328] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.272 [2024-07-14 10:12:35.717331] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717335] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.272 [2024-07-14 10:12:35.717346] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717351] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.272 [2024-07-14 10:12:35.717355] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.272 [2024-07-14 10:12:35.717362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.272 [2024-07-14 10:12:35.717381] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717439] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717443] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717447] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.717458] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717462] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717466] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.717473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.717492] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717542] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717548] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717552] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717556] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.717567] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717571] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717575] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.717582] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.717600] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717653] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717660] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717664] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717668] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.717679] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717683] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717687] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.717694] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.717712] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.717790] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717794] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717798] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.717806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.717824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717874] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717881] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717884] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717888] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.717900] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717904] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.717908] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.717915] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.717933] nvme_tcp.c: 
872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.717985] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.717992] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.717995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718011] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718015] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718019] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718093] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718100] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718104] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718108] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718119] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718123] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718127] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718152] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718264] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718348] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
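The run of FABRIC PROPERTY GET capsules around this point is the tail of the teardown that began at "Prepare to destruct SSD": with RTD3E = 0 the host fell back to the default 10000 ms shutdown timeout, requested shutdown via a Property Set of CC (the FABRIC PROPERTY SET above, per nvme_ctrlr_shutdown_set_cc_done), and is now polling CSTS until SHST reports shutdown complete. A self-contained sketch of that shutdown handshake in the same style as the enable sketch earlier (prop_get()/prop_set() and the simulated register file remain hypothetical illustrations, not SPDK API):

/* CC.SHN/CSTS.SHST shutdown handshake behind the repeated PROPERTY GET
 * capsules above (CC.SHN in bits 15:14, CSTS.SHST in bits 3:2; SHST
 * value 10b means shutdown processing complete). */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC         0x14u
#define NVME_REG_CSTS       0x1cu
#define NVME_CC_SHN_NORMAL  (1u << 14)  /* SHN = 01b: normal shutdown */
#define NVME_CSTS_SHST_MASK (3u << 2)
#define NVME_CSTS_SHST_DONE (2u << 2)   /* SHST = 10b: complete */

static uint32_t g_regs[0x40];           /* simulated controller registers */

static uint32_t prop_get(uint32_t off) { return g_regs[off / 4]; }

static void prop_set(uint32_t off, uint32_t val)
{
    g_regs[off / 4] = val;
    /* Simulated target: report shutdown complete once SHN is written. */
    if (off == NVME_REG_CC && (val & NVME_CC_SHN_NORMAL)) {
        g_regs[NVME_REG_CSTS / 4] |= NVME_CSTS_SHST_DONE;
    }
}

static bool shutdown_controller(void)
{
    prop_set(NVME_REG_CC, prop_get(NVME_REG_CC) | NVME_CC_SHN_NORMAL);
    /* "shutdown timeout = 10000 ms": bounded poll, one PROPERTY GET per
     * iteration (a real loop would sleep between polls). */
    for (int ms = 0; ms < 10000; ms++) {
        if ((prop_get(NVME_REG_CSTS) & NVME_CSTS_SHST_MASK) ==
            NVME_CSTS_SHST_DONE) {
            return true;
        }
    }
    return false;                       /* timed out; proceed with teardown */
}

int main(void)
{
    printf("shutdown %s\n", shutdown_controller() ? "complete" : "timed out");
    return 0;
}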
00:20:05.273 [2024-07-14 10:12:35.718355] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718359] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718375] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718380] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718391] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718410] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718467] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718479] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718483] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718488] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718500] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718505] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718509] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718537] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718592] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718598] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718602] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718606] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718632] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718637] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718641] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718667] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718717] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718724] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718728] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718732] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718743] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718747] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718751] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.273 [2024-07-14 10:12:35.718758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.273 [2024-07-14 10:12:35.718777] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.273 [2024-07-14 10:12:35.718843] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.273 [2024-07-14 10:12:35.718851] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.273 [2024-07-14 10:12:35.718854] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718858] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.273 [2024-07-14 10:12:35.718870] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718874] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.273 [2024-07-14 10:12:35.718878] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.274 [2024-07-14 10:12:35.718885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.274 [2024-07-14 10:12:35.718905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.274 [2024-07-14 10:12:35.718958] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.274 [2024-07-14 10:12:35.718964] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.274 [2024-07-14 10:12:35.718968] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.718972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.274 [2024-07-14 10:12:35.718983] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.718988] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.718991] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.274 [2024-07-14 10:12:35.718999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.274 [2024-07-14 10:12:35.719017] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.274 [2024-07-14 10:12:35.719070] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.274 [2024-07-14 10:12:35.719077] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.274 [2024-07-14 10:12:35.719080] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.719084] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on 
tqpair=0x12a7d70 00:20:05.274 [2024-07-14 10:12:35.719095] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.719100] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.719104] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.274 [2024-07-14 10:12:35.719111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.274 [2024-07-14 10:12:35.719129] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.274 [2024-07-14 10:12:35.719179] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.274 [2024-07-14 10:12:35.719186] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.274 [2024-07-14 10:12:35.719190] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.719194] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.274 [2024-07-14 10:12:35.723271] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.723290] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.723295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x12a7d70) 00:20:05.274 [2024-07-14 10:12:35.723320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.274 [2024-07-14 10:12:35.723348] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12f1a10, cid 3, qid 0 00:20:05.274 [2024-07-14 10:12:35.723412] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.274 [2024-07-14 10:12:35.723420] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.274 [2024-07-14 10:12:35.723424] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.274 [2024-07-14 10:12:35.723428] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x12f1a10) on tqpair=0x12a7d70 00:20:05.274 [2024-07-14 10:12:35.723438] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:05.274 00:20:05.274 10:12:35 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:05.274 [2024-07-14 10:12:35.761290] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
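The identify invocation above boils down to a handful of SPDK public-API calls: parse the transport ID string passed via -r, connect to the target, and read back the controller identify data. Below is a minimal C sketch of that flow, assuming only SPDK's public NVMe headers; it is an illustration for reading the trace that follows, not part of the captured output, and error handling is trimmed.

#include "spdk/stdinc.h"
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_nvme_transport_id trid = {};
	struct spdk_nvme_ctrlr *ctrlr;
	const struct spdk_nvme_ctrlr_data *cdata;

	spdk_env_opts_init(&env_opts);
	env_opts.name = "identify";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	/* Same connection string the harness passes via -r above. */
	if (spdk_nvme_transport_id_parse(&trid,
	    "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
	    "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
		return 1;
	}

	/* Drives the admin-queue bring-up traced below: connect adminq,
	 * read vs/cap, CC.EN = 1, poll CSTS.RDY, IDENTIFY, AERs, ... */
	ctrlr = spdk_nvme_connect(&trid, NULL, 0);
	if (ctrlr == NULL) {
		return 1;
	}

	cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	printf("Serial Number: %.20s\n", cdata->sn);
	printf("Model Number:  %.40s\n", cdata->mn);

	/* Triggers the controller shutdown/destruct seen at the end of the run. */
	spdk_nvme_detach(ctrlr);
	return 0;
}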
00:20:05.274 [2024-07-14 10:12:35.761343] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92908 ] 00:20:05.536 [2024-07-14 10:12:35.901330] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:05.536 [2024-07-14 10:12:35.901417] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:05.536 [2024-07-14 10:12:35.901424] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:05.536 [2024-07-14 10:12:35.901438] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:05.536 [2024-07-14 10:12:35.901448] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:05.536 [2024-07-14 10:12:35.901604] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:05.536 [2024-07-14 10:12:35.901673] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c49d70 0 00:20:05.536 [2024-07-14 10:12:35.907249] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:05.536 [2024-07-14 10:12:35.907272] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:05.536 [2024-07-14 10:12:35.907293] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:05.536 [2024-07-14 10:12:35.907297] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:05.536 [2024-07-14 10:12:35.907341] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.907348] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.907352] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.536 [2024-07-14 10:12:35.907365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:05.536 [2024-07-14 10:12:35.907395] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.536 [2024-07-14 10:12:35.915250] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.536 [2024-07-14 10:12:35.915271] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.536 [2024-07-14 10:12:35.915293] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915297] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.536 [2024-07-14 10:12:35.915308] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:05.536 [2024-07-14 10:12:35.915315] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:05.536 [2024-07-14 10:12:35.915322] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:05.536 [2024-07-14 10:12:35.915337] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915342] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915345] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.536 [2024-07-14 10:12:35.915354] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.536 [2024-07-14 10:12:35.915381] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.536 [2024-07-14 10:12:35.915446] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.536 [2024-07-14 10:12:35.915452] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.536 [2024-07-14 10:12:35.915456] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.536 [2024-07-14 10:12:35.915466] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:05.536 [2024-07-14 10:12:35.915473] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:05.536 [2024-07-14 10:12:35.915480] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915487] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.536 [2024-07-14 10:12:35.915494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.536 [2024-07-14 10:12:35.915528] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.536 [2024-07-14 10:12:35.915588] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.536 [2024-07-14 10:12:35.915595] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.536 [2024-07-14 10:12:35.915598] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915602] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.536 [2024-07-14 10:12:35.915609] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:05.536 [2024-07-14 10:12:35.915618] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:05.536 [2024-07-14 10:12:35.915625] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915628] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915632] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.536 [2024-07-14 10:12:35.915639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.536 [2024-07-14 10:12:35.915657] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.536 [2024-07-14 10:12:35.915708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.536 [2024-07-14 10:12:35.915715] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.536 [2024-07-14 
10:12:35.915718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.536 [2024-07-14 10:12:35.915729] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:05.536 [2024-07-14 10:12:35.915738] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915743] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915746] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.536 [2024-07-14 10:12:35.915753] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.536 [2024-07-14 10:12:35.915771] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.536 [2024-07-14 10:12:35.915836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.536 [2024-07-14 10:12:35.915842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.536 [2024-07-14 10:12:35.915846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.536 [2024-07-14 10:12:35.915850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.536 [2024-07-14 10:12:35.915856] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:05.536 [2024-07-14 10:12:35.915861] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:05.536 [2024-07-14 10:12:35.915869] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:05.536 [2024-07-14 10:12:35.915974] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:05.537 [2024-07-14 10:12:35.915979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:05.537 [2024-07-14 10:12:35.915988] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.915992] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.915995] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916003] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.537 [2024-07-14 10:12:35.916021] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.537 [2024-07-14 10:12:35.916080] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.916087] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.916091] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916095] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.537 
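What the trace is walking through at this point is the standard NVMe enable handshake, carried over the fabrics PROPERTY SET/GET commands printed above and below: write CC.EN = 1, then poll CSTS until RDY = 1. A hedged sketch follows; prop_get()/prop_set() are illustrative stand-ins for those property commands, not SPDK functions.

#include <stdint.h>

/* Illustrative stand-ins for the FABRIC PROPERTY GET/SET commands in the
 * trace; these are not SPDK APIs. */
extern uint64_t prop_get(uint32_t ofs);
extern void prop_set(uint32_t ofs, uint64_t val);

static void enable_controller(void)
{
	uint64_t cc = prop_get(0x14);         /* CC, offset 0x14 per the spec  */
	prop_set(0x14, cc | 0x1);             /* CC.EN = 1 -> PROPERTY SET     */
	while ((prop_get(0x1c) & 0x1) == 0) { /* CSTS.RDY, offset 0x1c -> GET  */
		/* each iteration is another "wait for CSTS.RDY = 1" poll */
	}
}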
[2024-07-14 10:12:35.916101] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:05.537 [2024-07-14 10:12:35.916111] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916119] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.537 [2024-07-14 10:12:35.916144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.537 [2024-07-14 10:12:35.916197] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.916204] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.916224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.537 [2024-07-14 10:12:35.916234] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:05.537 [2024-07-14 10:12:35.916239] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916248] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:05.537 [2024-07-14 10:12:35.916275] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916287] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916291] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916295] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.537 [2024-07-14 10:12:35.916326] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.537 [2024-07-14 10:12:35.916432] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.537 [2024-07-14 10:12:35.916440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.537 [2024-07-14 10:12:35.916444] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916448] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=4096, cccid=0 00:20:05.537 [2024-07-14 10:12:35.916453] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c935f0) on tqpair(0x1c49d70): expected_datao=0, payload_size=4096 00:20:05.537 [2024-07-14 10:12:35.916462] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916467] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916476] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.916482] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.916486] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916490] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.537 [2024-07-14 10:12:35.916500] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:05.537 [2024-07-14 10:12:35.916505] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:05.537 [2024-07-14 10:12:35.916510] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:05.537 [2024-07-14 10:12:35.916515] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:05.537 [2024-07-14 10:12:35.916520] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:05.537 [2024-07-14 10:12:35.916525] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916539] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916547] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916552] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916556] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.537 [2024-07-14 10:12:35.916585] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.537 [2024-07-14 10:12:35.916655] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.916662] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.916665] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916669] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c935f0) on tqpair=0x1c49d70 00:20:05.537 [2024-07-14 10:12:35.916678] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916682] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916685] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.537 [2024-07-14 10:12:35.916699] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916702] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916706] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.537 [2024-07-14 10:12:35.916718] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916722] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916725] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916731] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.537 [2024-07-14 10:12:35.916737] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916741] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916745] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.537 [2024-07-14 10:12:35.916755] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916769] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916776] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916780] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916783] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916791] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.537 [2024-07-14 10:12:35.916811] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c935f0, cid 0, qid 0 00:20:05.537 [2024-07-14 10:12:35.916819] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93750, cid 1, qid 0 00:20:05.537 [2024-07-14 10:12:35.916824] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c938b0, cid 2, qid 0 00:20:05.537 [2024-07-14 10:12:35.916828] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.537 [2024-07-14 10:12:35.916833] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.537 [2024-07-14 10:12:35.916928] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.916935] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.916938] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916942] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.537 [2024-07-14 10:12:35.916949] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:05.537 [2024-07-14 10:12:35.916954] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916963] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916973] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.916980] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916984] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.916988] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.537 [2024-07-14 10:12:35.916995] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:05.537 [2024-07-14 10:12:35.917014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.537 [2024-07-14 10:12:35.917073] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.537 [2024-07-14 10:12:35.917080] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.537 [2024-07-14 10:12:35.917083] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.917087] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.537 [2024-07-14 10:12:35.917147] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.917158] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:05.537 [2024-07-14 10:12:35.917165] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.917169] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.537 [2024-07-14 10:12:35.917173] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.917180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.917199] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.538 [2024-07-14 10:12:35.917278] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.538 [2024-07-14 10:12:35.917287] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.538 [2024-07-14 10:12:35.917291] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917295] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=4096, cccid=4 00:20:05.538 [2024-07-14 10:12:35.917299] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93b70) on tqpair(0x1c49d70): expected_datao=0, payload_size=4096 00:20:05.538 [2024-07-14 10:12:35.917308] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917312] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: 
enter 00:20:05.538 [2024-07-14 10:12:35.917320] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.917326] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.917330] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917334] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.917350] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:05.538 [2024-07-14 10:12:35.917360] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917370] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917382] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917386] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.917393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.917415] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.538 [2024-07-14 10:12:35.917495] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.538 [2024-07-14 10:12:35.917502] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.538 [2024-07-14 10:12:35.917505] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917509] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=4096, cccid=4 00:20:05.538 [2024-07-14 10:12:35.917514] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93b70) on tqpair(0x1c49d70): expected_datao=0, payload_size=4096 00:20:05.538 [2024-07-14 10:12:35.917522] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917526] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917534] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.917540] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.917543] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917547] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.917563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917574] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917582] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917586] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917590] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.917597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.917617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.538 [2024-07-14 10:12:35.917682] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.538 [2024-07-14 10:12:35.917688] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.538 [2024-07-14 10:12:35.917692] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917696] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=4096, cccid=4 00:20:05.538 [2024-07-14 10:12:35.917700] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93b70) on tqpair(0x1c49d70): expected_datao=0, payload_size=4096 00:20:05.538 [2024-07-14 10:12:35.917708] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917712] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917720] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.917726] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.917729] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917733] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.917743] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917751] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917762] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917769] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917774] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917779] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:05.538 [2024-07-14 10:12:35.917784] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:05.538 [2024-07-14 10:12:35.917789] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:05.538 [2024-07-14 10:12:35.917806] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917815] nvme_tcp.c: 
902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.917822] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.917829] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917833] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.917842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:05.538 [2024-07-14 10:12:35.917867] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.538 [2024-07-14 10:12:35.917875] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93cd0, cid 5, qid 0 00:20:05.538 [2024-07-14 10:12:35.917945] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.917951] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.917955] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917959] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.917967] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.917973] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.917976] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917980] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93cd0) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.917991] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917995] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.917999] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.918005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.918024] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93cd0, cid 5, qid 0 00:20:05.538 [2024-07-14 10:12:35.918083] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.918090] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.918094] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918097] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93cd0) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.918108] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918113] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918117] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.918123] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.918141] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93cd0, cid 5, qid 0 00:20:05.538 [2024-07-14 10:12:35.918211] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.918220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.918223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918227] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93cd0) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.918256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918265] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.918272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.918293] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93cd0, cid 5, qid 0 00:20:05.538 [2024-07-14 10:12:35.918349] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.538 [2024-07-14 10:12:35.918356] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.538 [2024-07-14 10:12:35.918360] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918364] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93cd0) on tqpair=0x1c49d70 00:20:05.538 [2024-07-14 10:12:35.918378] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918383] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.538 [2024-07-14 10:12:35.918387] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c49d70) 00:20:05.538 [2024-07-14 10:12:35.918395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.538 [2024-07-14 10:12:35.918402] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918406] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918410] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c49d70) 00:20:05.539 [2024-07-14 10:12:35.918417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.539 [2024-07-14 10:12:35.918424] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918428] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918432] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c49d70) 00:20:05.539 [2024-07-14 10:12:35.918438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:20:05.539 [2024-07-14 10:12:35.918446] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918450] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918454] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c49d70) 00:20:05.539 [2024-07-14 10:12:35.918461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.539 [2024-07-14 10:12:35.918481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93cd0, cid 5, qid 0 00:20:05.539 [2024-07-14 10:12:35.918489] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93b70, cid 4, qid 0 00:20:05.539 [2024-07-14 10:12:35.918494] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93e30, cid 6, qid 0 00:20:05.539 [2024-07-14 10:12:35.918499] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93f90, cid 7, qid 0 00:20:05.539 [2024-07-14 10:12:35.918658] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.539 [2024-07-14 10:12:35.918665] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.539 [2024-07-14 10:12:35.918668] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918672] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=8192, cccid=5 00:20:05.539 [2024-07-14 10:12:35.918677] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93cd0) on tqpair(0x1c49d70): expected_datao=0, payload_size=8192 00:20:05.539 [2024-07-14 10:12:35.918694] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918699] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918705] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.539 [2024-07-14 10:12:35.918711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.539 [2024-07-14 10:12:35.918714] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918718] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=512, cccid=4 00:20:05.539 [2024-07-14 10:12:35.918722] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93b70) on tqpair(0x1c49d70): expected_datao=0, payload_size=512 00:20:05.539 [2024-07-14 10:12:35.918729] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918733] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918739] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.539 [2024-07-14 10:12:35.918744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.539 [2024-07-14 10:12:35.918748] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918751] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=512, cccid=6 00:20:05.539 [2024-07-14 10:12:35.918756] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93e30) on tqpair(0x1c49d70): expected_datao=0, payload_size=512 00:20:05.539 [2024-07-14 10:12:35.918763] 
nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918767] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918772] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:05.539 [2024-07-14 10:12:35.918778] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:05.539 [2024-07-14 10:12:35.918781] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918794] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c49d70): datao=0, datal=4096, cccid=7 00:20:05.539 [2024-07-14 10:12:35.918816] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1c93f90) on tqpair(0x1c49d70): expected_datao=0, payload_size=4096 00:20:05.539 [2024-07-14 10:12:35.918824] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918828] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.539 [2024-07-14 10:12:35.918842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.539 [2024-07-14 10:12:35.918846] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93cd0) on tqpair=0x1c49d70 00:20:05.539 [2024-07-14 10:12:35.918870] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.539 [2024-07-14 10:12:35.918877] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.539 [2024-07-14 10:12:35.918880] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918885] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93b70) on tqpair=0x1c49d70 00:20:05.539 [2024-07-14 10:12:35.918896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.539 [2024-07-14 10:12:35.918902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.539 [2024-07-14 10:12:35.918906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.539 ===================================================== 00:20:05.539 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:05.539 ===================================================== 00:20:05.539 Controller Capabilities/Features 00:20:05.539 ================================ 00:20:05.539 Vendor ID: 8086 00:20:05.539 Subsystem Vendor ID: 8086 00:20:05.539 Serial Number: SPDK00000000000001 00:20:05.539 Model Number: SPDK bdev Controller 00:20:05.539 Firmware Version: 24.01.1 00:20:05.539 Recommended Arb Burst: 6 00:20:05.539 IEEE OUI Identifier: e4 d2 5c 00:20:05.539 Multi-path I/O 00:20:05.539 May have multiple subsystem ports: Yes 00:20:05.539 May have multiple controllers: Yes 00:20:05.539 Associated with SR-IOV VF: No 00:20:05.539 Max Data Transfer Size: 131072 00:20:05.539 Max Number of Namespaces: 32 00:20:05.539 Max Number of I/O Queues: 127 00:20:05.539 NVMe Specification Version (VS): 1.3 00:20:05.539 NVMe Specification Version (Identify): 1.3 00:20:05.539 Maximum Queue Entries: 128 00:20:05.539 Contiguous Queues Required: Yes 00:20:05.539 Arbitration Mechanisms Supported 00:20:05.539 Weighted Round Robin: Not Supported 00:20:05.539 Vendor Specific: Not Supported 00:20:05.539 Reset Timeout: 15000 ms 
00:20:05.539 Doorbell Stride: 4 bytes 00:20:05.539 NVM Subsystem Reset: Not Supported 00:20:05.539 Command Sets Supported 00:20:05.539 NVM Command Set: Supported 00:20:05.539 Boot Partition: Not Supported 00:20:05.539 Memory Page Size Minimum: 4096 bytes 00:20:05.539 Memory Page Size Maximum: 4096 bytes 00:20:05.539 Persistent Memory Region: Not Supported 00:20:05.539 Optional Asynchronous Events Supported 00:20:05.539 Namespace Attribute Notices: Supported 00:20:05.539 Firmware Activation Notices: Not Supported 00:20:05.539 ANA Change Notices: Not Supported 00:20:05.539 PLE Aggregate Log Change Notices: Not Supported 00:20:05.539 LBA Status Info Alert Notices: Not Supported 00:20:05.539 EGE Aggregate Log Change Notices: Not Supported 00:20:05.539 Normal NVM Subsystem Shutdown event: Not Supported 00:20:05.539 Zone Descriptor Change Notices: Not Supported 00:20:05.539 Discovery Log Change Notices: Not Supported 00:20:05.539 Controller Attributes 00:20:05.539 128-bit Host Identifier: Supported 00:20:05.539 Non-Operational Permissive Mode: Not Supported 00:20:05.539 NVM Sets: Not Supported 00:20:05.539 Read Recovery Levels: Not Supported 00:20:05.539 Endurance Groups: Not Supported 00:20:05.539 Predictable Latency Mode: Not Supported 00:20:05.539 Traffic Based Keep ALive: Not Supported 00:20:05.539 Namespace Granularity: Not Supported 00:20:05.539 SQ Associations: Not Supported 00:20:05.539 UUID List: Not Supported 00:20:05.539 Multi-Domain Subsystem: Not Supported 00:20:05.539 Fixed Capacity Management: Not Supported 00:20:05.539 Variable Capacity Management: Not Supported 00:20:05.539 Delete Endurance Group: Not Supported 00:20:05.539 Delete NVM Set: Not Supported 00:20:05.539 Extended LBA Formats Supported: Not Supported 00:20:05.539 Flexible Data Placement Supported: Not Supported 00:20:05.539 00:20:05.539 Controller Memory Buffer Support 00:20:05.539 ================================ 00:20:05.539 Supported: No 00:20:05.539 00:20:05.539 Persistent Memory Region Support 00:20:05.539 ================================ 00:20:05.539 Supported: No 00:20:05.539 00:20:05.539 Admin Command Set Attributes 00:20:05.539 ============================ 00:20:05.539 Security Send/Receive: Not Supported 00:20:05.539 Format NVM: Not Supported 00:20:05.539 Firmware Activate/Download: Not Supported 00:20:05.539 Namespace Management: Not Supported 00:20:05.539 Device Self-Test: Not Supported 00:20:05.539 Directives: Not Supported 00:20:05.539 NVMe-MI: Not Supported 00:20:05.539 Virtualization Management: Not Supported 00:20:05.539 Doorbell Buffer Config: Not Supported 00:20:05.539 Get LBA Status Capability: Not Supported 00:20:05.539 Command & Feature Lockdown Capability: Not Supported 00:20:05.539 Abort Command Limit: 4 00:20:05.539 Async Event Request Limit: 4 00:20:05.539 Number of Firmware Slots: N/A 00:20:05.539 Firmware Slot 1 Read-Only: N/A 00:20:05.539 Firmware Activation Without Reset: [2024-07-14 10:12:35.918910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93e30) on tqpair=0x1c49d70 00:20:05.539 [2024-07-14 10:12:35.918918] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.539 [2024-07-14 10:12:35.918924] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.539 [2024-07-14 10:12:35.918928] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.539 [2024-07-14 10:12:35.918932] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93f90) on tqpair=0x1c49d70 00:20:05.539 N/A 00:20:05.539 
Multiple Update Detection Support: N/A 00:20:05.539 Firmware Update Granularity: No Information Provided 00:20:05.539 Per-Namespace SMART Log: No 00:20:05.539 Asymmetric Namespace Access Log Page: Not Supported 00:20:05.539 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:05.539 Command Effects Log Page: Supported 00:20:05.539 Get Log Page Extended Data: Supported 00:20:05.540 Telemetry Log Pages: Not Supported 00:20:05.540 Persistent Event Log Pages: Not Supported 00:20:05.540 Supported Log Pages Log Page: May Support 00:20:05.540 Commands Supported & Effects Log Page: Not Supported 00:20:05.540 Feature Identifiers & Effects Log Page: May Support 00:20:05.540 NVMe-MI Commands & Effects Log Page: May Support 00:20:05.540 Data Area 4 for Telemetry Log: Not Supported 00:20:05.540 Error Log Page Entries Supported: 128 00:20:05.540 Keep Alive: Supported 00:20:05.540 Keep Alive Granularity: 10000 ms 00:20:05.540 00:20:05.540 NVM Command Set Attributes 00:20:05.540 ========================== 00:20:05.540 Submission Queue Entry Size 00:20:05.540 Max: 64 00:20:05.540 Min: 64 00:20:05.540 Completion Queue Entry Size 00:20:05.540 Max: 16 00:20:05.540 Min: 16 00:20:05.540 Number of Namespaces: 32 00:20:05.540 Compare Command: Supported 00:20:05.540 Write Uncorrectable Command: Not Supported 00:20:05.540 Dataset Management Command: Supported 00:20:05.540 Write Zeroes Command: Supported 00:20:05.540 Set Features Save Field: Not Supported 00:20:05.540 Reservations: Supported 00:20:05.540 Timestamp: Not Supported 00:20:05.540 Copy: Supported 00:20:05.540 Volatile Write Cache: Present 00:20:05.540 Atomic Write Unit (Normal): 1 00:20:05.540 Atomic Write Unit (PFail): 1 00:20:05.540 Atomic Compare & Write Unit: 1 00:20:05.540 Fused Compare & Write: Supported 00:20:05.540 Scatter-Gather List 00:20:05.540 SGL Command Set: Supported 00:20:05.540 SGL Keyed: Supported 00:20:05.540 SGL Bit Bucket Descriptor: Not Supported 00:20:05.540 SGL Metadata Pointer: Not Supported 00:20:05.540 Oversized SGL: Not Supported 00:20:05.540 SGL Metadata Address: Not Supported 00:20:05.540 SGL Offset: Supported 00:20:05.540 Transport SGL Data Block: Not Supported 00:20:05.540 Replay Protected Memory Block: Not Supported 00:20:05.540 00:20:05.540 Firmware Slot Information 00:20:05.540 ========================= 00:20:05.540 Active slot: 1 00:20:05.540 Slot 1 Firmware Revision: 24.01.1 00:20:05.540 00:20:05.540 00:20:05.540 Commands Supported and Effects 00:20:05.540 ============================== 00:20:05.540 Admin Commands 00:20:05.540 -------------- 00:20:05.540 Get Log Page (02h): Supported 00:20:05.540 Identify (06h): Supported 00:20:05.540 Abort (08h): Supported 00:20:05.540 Set Features (09h): Supported 00:20:05.540 Get Features (0Ah): Supported 00:20:05.540 Asynchronous Event Request (0Ch): Supported 00:20:05.540 Keep Alive (18h): Supported 00:20:05.540 I/O Commands 00:20:05.540 ------------ 00:20:05.540 Flush (00h): Supported LBA-Change 00:20:05.540 Write (01h): Supported LBA-Change 00:20:05.540 Read (02h): Supported 00:20:05.540 Compare (05h): Supported 00:20:05.540 Write Zeroes (08h): Supported LBA-Change 00:20:05.540 Dataset Management (09h): Supported LBA-Change 00:20:05.540 Copy (19h): Supported LBA-Change 00:20:05.540 Unknown (79h): Supported LBA-Change 00:20:05.540 Unknown (7Ah): Supported 00:20:05.540 00:20:05.540 Error Log 00:20:05.540 ========= 00:20:05.540 00:20:05.540 Arbitration 00:20:05.540 =========== 00:20:05.540 Arbitration Burst: 1 00:20:05.540 00:20:05.540 Power Management 00:20:05.540 ================ 
00:20:05.540 Number of Power States: 1 00:20:05.540 Current Power State: Power State #0 00:20:05.540 Power State #0: 00:20:05.540 Max Power: 0.00 W 00:20:05.540 Non-Operational State: Operational 00:20:05.540 Entry Latency: Not Reported 00:20:05.540 Exit Latency: Not Reported 00:20:05.540 Relative Read Throughput: 0 00:20:05.540 Relative Read Latency: 0 00:20:05.540 Relative Write Throughput: 0 00:20:05.540 Relative Write Latency: 0 00:20:05.540 Idle Power: Not Reported 00:20:05.540 Active Power: Not Reported 00:20:05.540 Non-Operational Permissive Mode: Not Supported 00:20:05.540 00:20:05.540 Health Information 00:20:05.540 ================== 00:20:05.540 Critical Warnings: 00:20:05.540 Available Spare Space: OK 00:20:05.540 Temperature: OK 00:20:05.540 Device Reliability: OK 00:20:05.540 Read Only: No 00:20:05.540 Volatile Memory Backup: OK 00:20:05.540 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:05.540 Temperature Threshold: [2024-07-14 10:12:35.919047] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.919054] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.919058] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c49d70) 00:20:05.540 [2024-07-14 10:12:35.919066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.540 [2024-07-14 10:12:35.919090] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93f90, cid 7, qid 0 00:20:05.540 [2024-07-14 10:12:35.919170] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.540 [2024-07-14 10:12:35.919177] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.540 [2024-07-14 10:12:35.919181] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.919185] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93f90) on tqpair=0x1c49d70 00:20:05.540 [2024-07-14 10:12:35.919221] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:05.540 [2024-07-14 10:12:35.923272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.540 [2024-07-14 10:12:35.923298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.540 [2024-07-14 10:12:35.923305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.540 [2024-07-14 10:12:35.923311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:05.540 [2024-07-14 10:12:35.923322] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.923326] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.923330] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.540 [2024-07-14 10:12:35.923338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.540 [2024-07-14 10:12:35.923366] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, 
qid 0 00:20:05.540 [2024-07-14 10:12:35.923433] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.540 [2024-07-14 10:12:35.923440] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.540 [2024-07-14 10:12:35.923444] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.923448] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.540 [2024-07-14 10:12:35.923457] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.923461] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.540 [2024-07-14 10:12:35.923464] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.540 [2024-07-14 10:12:35.923471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.540 [2024-07-14 10:12:35.923509] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.540 [2024-07-14 10:12:35.923584] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.540 [2024-07-14 10:12:35.923590] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.540 [2024-07-14 10:12:35.923594] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923598] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.923604] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:05.541 [2024-07-14 10:12:35.923609] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:05.541 [2024-07-14 10:12:35.923619] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923623] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923627] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.923634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.923653] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.923704] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.923711] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.923715] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923719] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.923730] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923735] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923739] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.923746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 
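The long run of near-identical records that continues below is the host-side shutdown poll: after the "Prepare to destruct SSD", "RTD3E = 0 us", and "shutdown timeout = 10000 ms" messages above, the driver repeatedly issues a Fabrics Property Get (the "FABRIC PROPERTY GET qid:0 cid:3" records) to read the controller's CSTS register until CSTS.SHST reports shutdown complete (see the "shutdown complete in 7 milliseconds" record further down). For illustration only, the same property can be read from a Linux initiator with a recent nvme-cli; the device name /dev/nvme0 is an assumption, not something taken from this run:

    # CSTS lives at controller register offset 0x1c; SHST is bits 03:02 and
    # reads 10b once shutdown processing has completed. Hypothetical check:
    nvme get-property /dev/nvme0 --offset=0x1c --human-readable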
[2024-07-14 10:12:35.923764] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.923818] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.923825] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.923829] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923833] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.923844] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923848] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923852] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.923859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.923877] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.923931] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.923937] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.923941] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923945] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.923956] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923960] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.923964] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.923971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.923990] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924046] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924053] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924057] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924060] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924071] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924076] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924079] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924087] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924105] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924159] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924165] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924169] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924173] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924184] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924188] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924192] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924229] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924285] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924292] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924296] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924300] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924311] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924316] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924320] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924347] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924402] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924408] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924412] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924416] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924427] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924431] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924435] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924442] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924515] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924521] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 
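The controller report threaded through these traces is printed by SPDK's identify example, which host/identify.sh aims at the target's TCP listener (10.0.0.2:4420, subsystem nqn.2016-06.io.spdk:cnode1). A standalone invocation against the same listener would look roughly like the sketch below; the example binary path and the exact -r transport-ID string are assumptions based on this test environment, not commands copied from this log:

    # Hypothetical re-run of the identify report shown above
    /home/vagrant/spdk_repo/spdk/build/examples/identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'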
[2024-07-14 10:12:35.924525] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924529] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924544] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924548] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924573] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924624] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924631] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924635] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924639] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924651] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924658] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924664] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924697] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924753] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924759] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924763] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924767] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924779] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924783] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924787] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924812] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924867] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924873] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924877] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924881] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.924892] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924897] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924900] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.541 [2024-07-14 10:12:35.924907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.541 [2024-07-14 10:12:35.924926] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.541 [2024-07-14 10:12:35.924979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.541 [2024-07-14 10:12:35.924986] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.541 [2024-07-14 10:12:35.924990] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.924994] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.541 [2024-07-14 10:12:35.925005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.925009] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.541 [2024-07-14 10:12:35.925013] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925020] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925038] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925090] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925096] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925100] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925104] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925115] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925119] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925123] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925130] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925148] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925212] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925220] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925224] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925240] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925244] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925248] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925255] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925276] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925335] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925341] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925345] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925349] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925360] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925368] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925375] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925393] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925449] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925455] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925459] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925463] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925474] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925478] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925482] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925508] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925559] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925565] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925569] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925573] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925584] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925588] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925592] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925617] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925697] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925701] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925705] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925730] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925784] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925794] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925798] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925809] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925813] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925817] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.925842] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.925896] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.925902] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.925906] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925910] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.925921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925926] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.925929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.925936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:05.542 [2024-07-14 10:12:35.925954] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.926006] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.926013] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.926016] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926020] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.926031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926036] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926039] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.926046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.926065] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.926116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.926122] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.926126] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926130] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.926141] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926145] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926149] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.926156] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.926174] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.926260] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.926268] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.926272] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926276] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.926288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926292] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926296] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.926304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.926325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.926380] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.926387] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.926391] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926395] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.926406] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926411] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926415] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.542 [2024-07-14 10:12:35.926422] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.542 [2024-07-14 10:12:35.926451] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.542 [2024-07-14 10:12:35.926509] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.542 [2024-07-14 10:12:35.926516] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.542 [2024-07-14 10:12:35.926520] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.542 [2024-07-14 10:12:35.926524] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.542 [2024-07-14 10:12:35.926535] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926540] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926544] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.926551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.926570] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.926638] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.926652] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 [2024-07-14 10:12:35.926657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926661] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.926673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926681] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.926688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.926708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.926760] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.926773] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 
[2024-07-14 10:12:35.926778] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926782] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.926801] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926806] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926810] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.926817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.926838] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.926889] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.926900] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 [2024-07-14 10:12:35.926905] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926909] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.926920] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926925] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.926929] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.926936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.926955] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.927047] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.927054] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 [2024-07-14 10:12:35.927058] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.927062] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.927074] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.927078] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.927082] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.927090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.927108] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.927165] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.927175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 [2024-07-14 10:12:35.927180] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.927184] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.927196] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.931265] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.931290] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c49d70) 00:20:05.543 [2024-07-14 10:12:35.931299] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:05.543 [2024-07-14 10:12:35.931325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1c93a10, cid 3, qid 0 00:20:05.543 [2024-07-14 10:12:35.931389] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:05.543 [2024-07-14 10:12:35.931396] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:05.543 [2024-07-14 10:12:35.931400] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:05.543 [2024-07-14 10:12:35.931404] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1c93a10) on tqpair=0x1c49d70 00:20:05.543 [2024-07-14 10:12:35.931413] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:20:05.543 0 Kelvin (-273 Celsius) 00:20:05.543 Available Spare: 0% 00:20:05.543 Available Spare Threshold: 0% 00:20:05.543 Life Percentage Used: 0% 00:20:05.543 Data Units Read: 0 00:20:05.543 Data Units Written: 0 00:20:05.543 Host Read Commands: 0 00:20:05.543 Host Write Commands: 0 00:20:05.543 Controller Busy Time: 0 minutes 00:20:05.543 Power Cycles: 0 00:20:05.543 Power On Hours: 0 hours 00:20:05.543 Unsafe Shutdowns: 0 00:20:05.543 Unrecoverable Media Errors: 0 00:20:05.543 Lifetime Error Log Entries: 0 00:20:05.543 Warning Temperature Time: 0 minutes 00:20:05.543 Critical Temperature Time: 0 minutes 00:20:05.543 00:20:05.543 Number of Queues 00:20:05.543 ================ 00:20:05.543 Number of I/O Submission Queues: 127 00:20:05.543 Number of I/O Completion Queues: 127 00:20:05.543 00:20:05.543 Active Namespaces 00:20:05.543 ================= 00:20:05.543 Namespace ID:1 00:20:05.543 Error Recovery Timeout: Unlimited 00:20:05.543 Command Set Identifier: NVM (00h) 00:20:05.543 Deallocate: Supported 00:20:05.543 Deallocated/Unwritten Error: Not Supported 00:20:05.543 Deallocated Read Value: Unknown 00:20:05.543 Deallocate in Write Zeroes: Not Supported 00:20:05.543 Deallocated Guard Field: 0xFFFF 00:20:05.543 Flush: Supported 00:20:05.543 Reservation: Supported 00:20:05.543 Namespace Sharing Capabilities: Multiple Controllers 00:20:05.543 Size (in LBAs): 131072 (0GiB) 00:20:05.543 Capacity (in LBAs): 131072 (0GiB) 00:20:05.543 Utilization (in LBAs): 131072 (0GiB) 00:20:05.543 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:05.543 EUI64: ABCDEF0123456789 00:20:05.543 UUID: 83bb02a4-9a69-4328-a1c9-e7899125d9ce 00:20:05.543 Thin Provisioning: Not Supported 00:20:05.543 Per-NS Atomic Units: Yes 00:20:05.543 Atomic Boundary Size (Normal): 0 00:20:05.543 Atomic Boundary Size (PFail): 0 00:20:05.543 Atomic Boundary Offset: 0 00:20:05.543 Maximum Single Source Range Length: 65535 00:20:05.543 Maximum Copy Length: 65535 00:20:05.543 Maximum Source Range Count: 1 00:20:05.543 NGUID/EUI64 Never Reused: No 00:20:05.543 Namespace Write Protected: No 00:20:05.543 Number of LBA Formats: 1 00:20:05.543 Current LBA Format: LBA Format #00 00:20:05.543 LBA Format #00: Data Size: 512 Metadata 
Size: 0 00:20:05.543 00:20:05.543 10:12:35 -- host/identify.sh@51 -- # sync 00:20:05.543 10:12:35 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:05.543 10:12:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:05.543 10:12:35 -- common/autotest_common.sh@10 -- # set +x 00:20:05.543 10:12:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:05.543 10:12:36 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:05.543 10:12:36 -- host/identify.sh@56 -- # nvmftestfini 00:20:05.543 10:12:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:05.543 10:12:36 -- nvmf/common.sh@116 -- # sync 00:20:05.543 10:12:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:05.543 10:12:36 -- nvmf/common.sh@119 -- # set +e 00:20:05.543 10:12:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:05.543 10:12:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:05.543 rmmod nvme_tcp 00:20:05.543 rmmod nvme_fabrics 00:20:05.543 rmmod nvme_keyring 00:20:05.543 10:12:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:05.543 10:12:36 -- nvmf/common.sh@123 -- # set -e 00:20:05.543 10:12:36 -- nvmf/common.sh@124 -- # return 0 00:20:05.543 10:12:36 -- nvmf/common.sh@477 -- # '[' -n 92852 ']' 00:20:05.543 10:12:36 -- nvmf/common.sh@478 -- # killprocess 92852 00:20:05.543 10:12:36 -- common/autotest_common.sh@926 -- # '[' -z 92852 ']' 00:20:05.543 10:12:36 -- common/autotest_common.sh@930 -- # kill -0 92852 00:20:05.543 10:12:36 -- common/autotest_common.sh@931 -- # uname 00:20:05.543 10:12:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:05.543 10:12:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 92852 00:20:05.801 10:12:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:05.801 10:12:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:05.801 killing process with pid 92852 00:20:05.801 10:12:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 92852' 00:20:05.801 10:12:36 -- common/autotest_common.sh@945 -- # kill 92852 00:20:05.801 [2024-07-14 10:12:36.082081] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:05.801 10:12:36 -- common/autotest_common.sh@950 -- # wait 92852 00:20:05.801 10:12:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:05.801 10:12:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:05.801 10:12:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:05.801 10:12:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.801 10:12:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:05.801 10:12:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.801 10:12:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:05.801 10:12:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.060 10:12:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:06.060 00:20:06.060 real 0m2.598s 00:20:06.060 user 0m7.225s 00:20:06.060 sys 0m0.667s 00:20:06.060 10:12:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:06.060 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.060 ************************************ 00:20:06.060 END TEST nvmf_identify 00:20:06.060 ************************************ 00:20:06.060 10:12:36 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:06.060 10:12:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:06.060 10:12:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:06.060 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.060 ************************************ 00:20:06.060 START TEST nvmf_perf 00:20:06.060 ************************************ 00:20:06.060 10:12:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:06.060 * Looking for test storage... 00:20:06.060 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:06.060 10:12:36 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:06.060 10:12:36 -- nvmf/common.sh@7 -- # uname -s 00:20:06.060 10:12:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:06.060 10:12:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:06.060 10:12:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:06.060 10:12:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:06.060 10:12:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:06.060 10:12:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:06.060 10:12:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:06.060 10:12:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:06.060 10:12:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:06.060 10:12:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:06.060 10:12:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:06.060 10:12:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:06.060 10:12:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:06.060 10:12:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:06.060 10:12:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:06.060 10:12:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:06.060 10:12:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:06.060 10:12:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:06.060 10:12:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:06.060 10:12:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.060 10:12:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.060 10:12:36 -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.060 10:12:36 -- paths/export.sh@5 -- # export PATH 00:20:06.060 10:12:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:06.060 10:12:36 -- nvmf/common.sh@46 -- # : 0 00:20:06.060 10:12:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:06.060 10:12:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:06.060 10:12:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:06.060 10:12:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:06.060 10:12:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:06.060 10:12:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:06.060 10:12:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:06.060 10:12:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:06.060 10:12:36 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:06.061 10:12:36 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:06.061 10:12:36 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:06.061 10:12:36 -- host/perf.sh@17 -- # nvmftestinit 00:20:06.061 10:12:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:06.061 10:12:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:06.061 10:12:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:06.061 10:12:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:06.061 10:12:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:06.061 10:12:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.061 10:12:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.061 10:12:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:06.061 10:12:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:06.061 10:12:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:06.061 10:12:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:06.061 10:12:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:06.061 10:12:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:06.061 10:12:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:06.061 10:12:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.061 10:12:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.061 10:12:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:06.061 10:12:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:06.061 10:12:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:06.061 10:12:36 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:06.061 10:12:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:06.061 10:12:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.061 10:12:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:06.061 10:12:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:06.061 10:12:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:06.061 10:12:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:06.061 10:12:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:06.061 10:12:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:06.061 Cannot find device "nvmf_tgt_br" 00:20:06.061 10:12:36 -- nvmf/common.sh@154 -- # true 00:20:06.061 10:12:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:06.061 Cannot find device "nvmf_tgt_br2" 00:20:06.061 10:12:36 -- nvmf/common.sh@155 -- # true 00:20:06.061 10:12:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:06.061 10:12:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:06.061 Cannot find device "nvmf_tgt_br" 00:20:06.061 10:12:36 -- nvmf/common.sh@157 -- # true 00:20:06.061 10:12:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:06.061 Cannot find device "nvmf_tgt_br2" 00:20:06.061 10:12:36 -- nvmf/common.sh@158 -- # true 00:20:06.061 10:12:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:06.319 10:12:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:06.319 10:12:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:06.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.319 10:12:36 -- nvmf/common.sh@161 -- # true 00:20:06.319 10:12:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:06.319 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:06.319 10:12:36 -- nvmf/common.sh@162 -- # true 00:20:06.319 10:12:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:06.319 10:12:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:06.319 10:12:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:06.319 10:12:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:06.319 10:12:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:06.319 10:12:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:06.319 10:12:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:06.319 10:12:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:06.319 10:12:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:06.319 10:12:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:06.319 10:12:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:06.319 10:12:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:06.319 10:12:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:06.319 10:12:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:06.319 10:12:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
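Taken together, the nvmf_veth_init commands traced above and just below build a small bridged topology: the initiator address (10.0.0.1) stays in the root namespace while the target interfaces (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, with the host ends of the veth pairs joined by the nvmf_br bridge. A condensed sketch of the same plan, assuming only iproute2 and using the names and addresses from this trace (the second target interface is omitted for brevity):

    # Illustrative summary, not the test's own script
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br    # bridge joins the two halves
    ip link set nvmf_tgt_br master nvmf_br
    # bring every link up, then ping 10.0.0.2 to verify the data path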
00:20:06.319 10:12:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:06.319 10:12:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:06.319 10:12:36 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:06.319 10:12:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:06.319 10:12:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:06.319 10:12:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:06.319 10:12:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:06.319 10:12:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:06.319 10:12:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:06.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:20:06.319 00:20:06.319 --- 10.0.0.2 ping statistics --- 00:20:06.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.319 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:20:06.319 10:12:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:06.319 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:06.319 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:20:06.319 00:20:06.319 --- 10.0.0.3 ping statistics --- 00:20:06.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.319 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:06.319 10:12:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:06.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:20:06.319 00:20:06.319 --- 10.0.0.1 ping statistics --- 00:20:06.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.319 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:20:06.319 10:12:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.319 10:12:36 -- nvmf/common.sh@421 -- # return 0 00:20:06.319 10:12:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:06.319 10:12:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.319 10:12:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:06.319 10:12:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:06.319 10:12:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.319 10:12:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:06.319 10:12:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:06.577 10:12:36 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:06.577 10:12:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:06.577 10:12:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:06.577 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.577 10:12:36 -- nvmf/common.sh@469 -- # nvmfpid=93071 00:20:06.577 10:12:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:06.577 10:12:36 -- nvmf/common.sh@470 -- # waitforlisten 93071 00:20:06.577 10:12:36 -- common/autotest_common.sh@819 -- # '[' -z 93071 ']' 00:20:06.577 10:12:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.577 10:12:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:06.577 10:12:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:06.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.577 10:12:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:06.577 10:12:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.577 [2024-07-14 10:12:36.928434] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:06.577 [2024-07-14 10:12:36.928547] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.577 [2024-07-14 10:12:37.065297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:06.834 [2024-07-14 10:12:37.164320] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:06.834 [2024-07-14 10:12:37.164505] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.834 [2024-07-14 10:12:37.164521] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.834 [2024-07-14 10:12:37.164533] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:06.834 [2024-07-14 10:12:37.164704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:06.834 [2024-07-14 10:12:37.165250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:06.834 [2024-07-14 10:12:37.165463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:06.834 [2024-07-14 10:12:37.165471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:07.399 10:12:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:07.399 10:12:37 -- common/autotest_common.sh@852 -- # return 0 00:20:07.399 10:12:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:07.399 10:12:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:07.399 10:12:37 -- common/autotest_common.sh@10 -- # set +x 00:20:07.657 10:12:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.657 10:12:37 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:07.657 10:12:37 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:07.914 10:12:38 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:07.914 10:12:38 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:08.173 10:12:38 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:20:08.173 10:12:38 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:08.432 10:12:38 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:08.432 10:12:38 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:20:08.432 10:12:38 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:08.432 10:12:38 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:08.432 10:12:38 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:08.691 [2024-07-14 10:12:39.032651] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.691 10:12:39 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.951 10:12:39 -- host/perf.sh@45 -- # for bdev in 
$bdevs 00:20:08.951 10:12:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:08.951 10:12:39 -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:08.951 10:12:39 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:09.210 10:12:39 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.469 [2024-07-14 10:12:39.861714] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.469 10:12:39 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:09.728 10:12:40 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:20:09.728 10:12:40 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:09.728 10:12:40 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:09.728 10:12:40 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:20:10.666 Initializing NVMe Controllers 00:20:10.666 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:20:10.666 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:20:10.666 Initialization complete. Launching workers. 00:20:10.666 ======================================================== 00:20:10.666 Latency(us) 00:20:10.666 Device Information : IOPS MiB/s Average min max 00:20:10.666 PCIE (0000:00:06.0) NSID 1 from core 0: 22303.00 87.12 1444.77 409.05 8531.93 00:20:10.666 ======================================================== 00:20:10.666 Total : 22303.00 87.12 1444.77 409.05 8531.93 00:20:10.666 00:20:10.666 10:12:41 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:12.044 Initializing NVMe Controllers 00:20:12.044 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:12.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:12.044 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:12.044 Initialization complete. Launching workers. 
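Taken together, the rpc.py calls above export two bdevs — the 64 MB Malloc0 (512-byte blocks) and the local NVMe drive Nvme0n1 — through a single NVMe/TCP subsystem. Consolidated into one place (commands exactly as in the log; $rpc stands in for the full scripts/rpc.py path purely for readability):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The PCIe run above (about 22.3k IOPS at 1.44 ms average with qd=32 against 0000:00:06.0) is the local baseline; the table for the first over-fabric run, just launched with -q 1, follows.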
00:20:12.044 ======================================================== 00:20:12.044 Latency(us) 00:20:12.044 Device Information : IOPS MiB/s Average min max 00:20:12.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4028.15 15.73 247.97 98.90 7197.88 00:20:12.044 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 122.88 0.48 8202.01 7045.58 12012.62 00:20:12.044 ======================================================== 00:20:12.044 Total : 4151.03 16.21 483.44 98.90 12012.62 00:20:12.044 00:20:12.044 10:12:42 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:13.437 Initializing NVMe Controllers 00:20:13.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:13.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:13.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:13.437 Initialization complete. Launching workers. 00:20:13.437 ======================================================== 00:20:13.437 Latency(us) 00:20:13.437 Device Information : IOPS MiB/s Average min max 00:20:13.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10166.93 39.71 3146.92 599.22 6756.26 00:20:13.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2643.18 10.32 12129.51 7321.41 20397.67 00:20:13.437 ======================================================== 00:20:13.437 Total : 12810.11 50.04 5000.35 599.22 20397.67 00:20:13.437 00:20:13.437 10:12:43 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:20:13.437 10:12:43 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:15.982 Initializing NVMe Controllers 00:20:15.982 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:15.982 Controller IO queue size 128, less than required. 00:20:15.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.982 Controller IO queue size 128, less than required. 00:20:15.982 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:15.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:15.982 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:15.983 Initialization complete. Launching workers. 
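The -q 1 table above is internally consistent with its queue depth: by Little's law (outstanding I/Os = IOPS × mean latency), NSID 1 gives 4028.15 × 247.97 µs ≈ 1.00 and NSID 2 gives 122.88 × 8202.01 µs ≈ 1.01, so each namespace really did keep exactly one I/O in flight, and the roughly 33× latency gap between the two namespaces translates directly into the 33× IOPS gap at this depth. The same check applies to the deeper-queue runs whose tables follow.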
00:20:15.983 ======================================================== 00:20:15.983 Latency(us) 00:20:15.983 Device Information : IOPS MiB/s Average min max 00:20:15.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1730.00 432.50 78695.38 45777.84 295977.16 00:20:15.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 588.00 147.00 230071.40 129941.57 348351.28 00:20:15.983 ======================================================== 00:20:15.983 Total : 2318.00 579.50 117094.47 45777.84 348351.28 00:20:15.983 00:20:15.983 10:12:46 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:20:16.242 No valid NVMe controllers or AIO or URING devices found 00:20:16.242 Initializing NVMe Controllers 00:20:16.242 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:16.242 Controller IO queue size 128, less than required. 00:20:16.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:16.242 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:20:16.242 Controller IO queue size 128, less than required. 00:20:16.242 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:16.242 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:20:16.242 WARNING: Some requested NVMe devices were skipped 00:20:16.242 10:12:46 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:20:18.775 Initializing NVMe Controllers 00:20:18.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:18.775 Controller IO queue size 128, less than required. 00:20:18.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:18.775 Controller IO queue size 128, less than required. 00:20:18.775 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:18.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:18.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:20:18.775 Initialization complete. Launching workers. 
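The -o 36964 pass above found nothing to test by construction: 36964 = 512 × 72 + 100 = 4096 × 9 + 100, so it is a multiple of neither the 512-byte sector size of NSID 1 nor the 4096-byte sector size of NSID 2, and spdk_nvme_perf drops any namespace whose sector size does not divide the I/O size — hence the two WARNINGs and the "No valid NVMe controllers or AIO or URING devices found" line. The --transport-stat run just launched additionally prints per-namespace TCP poll and completion counters ahead of its latency table, shown below.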
00:20:18.775 00:20:18.775 ==================== 00:20:18.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:20:18.775 TCP transport: 00:20:18.775 polls: 9978 00:20:18.775 idle_polls: 5272 00:20:18.775 sock_completions: 4706 00:20:18.775 nvme_completions: 2991 00:20:18.775 submitted_requests: 4677 00:20:18.775 queued_requests: 1 00:20:18.775 00:20:18.775 ==================== 00:20:18.775 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:20:18.775 TCP transport: 00:20:18.775 polls: 10074 00:20:18.775 idle_polls: 6158 00:20:18.775 sock_completions: 3916 00:20:18.775 nvme_completions: 7306 00:20:18.775 submitted_requests: 11040 00:20:18.775 queued_requests: 1 00:20:18.775 ======================================================== 00:20:18.775 Latency(us) 00:20:18.775 Device Information : IOPS MiB/s Average min max 00:20:18.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 811.47 202.87 162492.88 95978.07 287931.57 00:20:18.775 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1889.93 472.48 67807.16 35253.50 113849.80 00:20:18.775 ======================================================== 00:20:18.775 Total : 2701.40 675.35 96249.68 35253.50 287931.57 00:20:18.775 00:20:18.775 10:12:49 -- host/perf.sh@66 -- # sync 00:20:18.775 10:12:49 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:19.035 10:12:49 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:20:19.035 10:12:49 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:20:19.035 10:12:49 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:20:19.293 10:12:49 -- host/perf.sh@72 -- # ls_guid=fa114653-1a17-41e4-afae-3bfa1ece7b67 00:20:19.293 10:12:49 -- host/perf.sh@73 -- # get_lvs_free_mb fa114653-1a17-41e4-afae-3bfa1ece7b67 00:20:19.293 10:12:49 -- common/autotest_common.sh@1343 -- # local lvs_uuid=fa114653-1a17-41e4-afae-3bfa1ece7b67 00:20:19.293 10:12:49 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:19.294 10:12:49 -- common/autotest_common.sh@1345 -- # local fc 00:20:19.294 10:12:49 -- common/autotest_common.sh@1346 -- # local cs 00:20:19.294 10:12:49 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:19.552 10:12:49 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:19.552 { 00:20:19.552 "base_bdev": "Nvme0n1", 00:20:19.552 "block_size": 4096, 00:20:19.552 "cluster_size": 4194304, 00:20:19.552 "free_clusters": 1278, 00:20:19.552 "name": "lvs_0", 00:20:19.552 "total_data_clusters": 1278, 00:20:19.552 "uuid": "fa114653-1a17-41e4-afae-3bfa1ece7b67" 00:20:19.552 } 00:20:19.552 ]' 00:20:19.552 10:12:49 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="fa114653-1a17-41e4-afae-3bfa1ece7b67") .free_clusters' 00:20:19.552 10:12:50 -- common/autotest_common.sh@1348 -- # fc=1278 00:20:19.552 10:12:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="fa114653-1a17-41e4-afae-3bfa1ece7b67") .cluster_size' 00:20:19.552 10:12:50 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:19.552 10:12:50 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:20:19.552 5112 00:20:19.552 10:12:50 -- common/autotest_common.sh@1353 -- # echo 5112 00:20:19.552 10:12:50 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:20:19.552 10:12:50 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_create -u fa114653-1a17-41e4-afae-3bfa1ece7b67 lbd_0 5112 00:20:19.810 10:12:50 -- host/perf.sh@80 -- # lb_guid=ba84769e-1f40-4121-837d-19289c975fd4 00:20:19.810 10:12:50 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore ba84769e-1f40-4121-837d-19289c975fd4 lvs_n_0 00:20:20.376 10:12:50 -- host/perf.sh@83 -- # ls_nested_guid=44f523bc-c27a-4bd8-97c9-be14acf3ca2a 00:20:20.376 10:12:50 -- host/perf.sh@84 -- # get_lvs_free_mb 44f523bc-c27a-4bd8-97c9-be14acf3ca2a 00:20:20.376 10:12:50 -- common/autotest_common.sh@1343 -- # local lvs_uuid=44f523bc-c27a-4bd8-97c9-be14acf3ca2a 00:20:20.376 10:12:50 -- common/autotest_common.sh@1344 -- # local lvs_info 00:20:20.376 10:12:50 -- common/autotest_common.sh@1345 -- # local fc 00:20:20.376 10:12:50 -- common/autotest_common.sh@1346 -- # local cs 00:20:20.376 10:12:50 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:20.376 10:12:50 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:20:20.376 { 00:20:20.376 "base_bdev": "Nvme0n1", 00:20:20.376 "block_size": 4096, 00:20:20.376 "cluster_size": 4194304, 00:20:20.376 "free_clusters": 0, 00:20:20.376 "name": "lvs_0", 00:20:20.376 "total_data_clusters": 1278, 00:20:20.376 "uuid": "fa114653-1a17-41e4-afae-3bfa1ece7b67" 00:20:20.376 }, 00:20:20.376 { 00:20:20.376 "base_bdev": "ba84769e-1f40-4121-837d-19289c975fd4", 00:20:20.376 "block_size": 4096, 00:20:20.376 "cluster_size": 4194304, 00:20:20.376 "free_clusters": 1276, 00:20:20.376 "name": "lvs_n_0", 00:20:20.376 "total_data_clusters": 1276, 00:20:20.376 "uuid": "44f523bc-c27a-4bd8-97c9-be14acf3ca2a" 00:20:20.376 } 00:20:20.376 ]' 00:20:20.376 10:12:50 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="44f523bc-c27a-4bd8-97c9-be14acf3ca2a") .free_clusters' 00:20:20.634 10:12:50 -- common/autotest_common.sh@1348 -- # fc=1276 00:20:20.634 10:12:50 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="44f523bc-c27a-4bd8-97c9-be14acf3ca2a") .cluster_size' 00:20:20.634 10:12:51 -- common/autotest_common.sh@1349 -- # cs=4194304 00:20:20.634 10:12:51 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:20:20.634 10:12:51 -- common/autotest_common.sh@1353 -- # echo 5104 00:20:20.634 5104 00:20:20.634 10:12:51 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:20:20.634 10:12:51 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 44f523bc-c27a-4bd8-97c9-be14acf3ca2a lbd_nest_0 5104 00:20:20.892 10:12:51 -- host/perf.sh@88 -- # lb_nested_guid=70e16ba9-38f2-4ee4-86c8-50dc4cc4e4fc 00:20:20.892 10:12:51 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:20.892 10:12:51 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:20:20.892 10:12:51 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 70e16ba9-38f2-4ee4-86c8-50dc4cc4e4fc 00:20:21.150 10:12:51 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.407 10:12:51 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:20:21.407 10:12:51 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:20:21.407 10:12:51 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:21.407 10:12:51 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:21.407 10:12:51 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:21.665 No valid NVMe controllers or AIO or URING devices found 00:20:21.923 Initializing NVMe Controllers 00:20:21.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:21.923 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:21.923 WARNING: Some requested NVMe devices were skipped 00:20:21.923 10:12:52 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:21.923 10:12:52 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.123 Initializing NVMe Controllers 00:20:34.123 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.123 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:34.123 Initialization complete. Launching workers. 00:20:34.123 ======================================================== 00:20:34.123 Latency(us) 00:20:34.123 Device Information : IOPS MiB/s Average min max 00:20:34.123 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 885.40 110.67 1128.68 355.58 7621.90 00:20:34.123 ======================================================== 00:20:34.123 Total : 885.40 110.67 1128.68 355.58 7621.90 00:20:34.123 00:20:34.124 10:13:02 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:34.124 10:13:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:34.124 10:13:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:34.124 No valid NVMe controllers or AIO or URING devices found 00:20:34.124 Initializing NVMe Controllers 00:20:34.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:34.124 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:34.124 WARNING: Some requested NVMe devices were skipped 00:20:34.124 10:13:02 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:34.124 10:13:02 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.096 Initializing NVMe Controllers 00:20:44.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.096 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:44.096 Initialization complete. Launching workers. 
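The skipped 512-byte passes above are expected arithmetic rather than a setup fault: the nested lvol exported as NSID 1 is 5351931904 bytes = 5104 MiB (matching the 5104 MB volume carved out of lvs_n_0 earlier) with a 4096-byte block size, so a 512-byte I/O size cannot address it and perf removes the namespace, leaving only the 131072-byte pass of each queue-depth sweep to actually run. The completed -q 1 pass is again self-consistent: 885.40 IOPS × 1128.68 µs ≈ 1.0 outstanding 128 KiB I/O.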
00:20:44.096 ======================================================== 00:20:44.096 Latency(us) 00:20:44.096 Device Information : IOPS MiB/s Average min max 00:20:44.096 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1130.50 141.31 28347.94 8071.91 254502.19 00:20:44.096 ======================================================== 00:20:44.096 Total : 1130.50 141.31 28347.94 8071.91 254502.19 00:20:44.096 00:20:44.096 10:13:13 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:20:44.096 10:13:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:44.096 10:13:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:44.096 No valid NVMe controllers or AIO or URING devices found 00:20:44.096 Initializing NVMe Controllers 00:20:44.096 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:44.096 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:20:44.096 WARNING: Some requested NVMe devices were skipped 00:20:44.096 10:13:13 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:20:44.096 10:13:13 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:54.150 Initializing NVMe Controllers 00:20:54.151 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:54.151 Controller IO queue size 128, less than required. 00:20:54.151 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:20:54.151 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:54.151 Initialization complete. Launching workers. 
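The -q 32 pass above holds one more Little's-law check: 1130.50 IOPS × 28347.94 µs ≈ 32 outstanding I/Os, as requested. Note how the maximum latency stretches to roughly 254 ms at this depth — throughput gained by queueing is paid for in tail latency. The -q 128 table follows.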
00:20:54.151 ======================================================== 00:20:54.151 Latency(us) 00:20:54.151 Device Information : IOPS MiB/s Average min max 00:20:54.151 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4158.82 519.85 30802.97 11407.99 63859.27 00:20:54.151 ======================================================== 00:20:54.151 Total : 4158.82 519.85 30802.97 11407.99 63859.27 00:20:54.151 00:20:54.151 10:13:23 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:54.151 10:13:24 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 70e16ba9-38f2-4ee4-86c8-50dc4cc4e4fc 00:20:54.151 10:13:24 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:20:54.151 10:13:24 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ba84769e-1f40-4121-837d-19289c975fd4 00:20:54.409 10:13:24 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:20:54.667 10:13:24 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:20:54.667 10:13:24 -- host/perf.sh@114 -- # nvmftestfini 00:20:54.667 10:13:24 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:54.667 10:13:24 -- nvmf/common.sh@116 -- # sync 00:20:54.667 10:13:24 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:54.667 10:13:24 -- nvmf/common.sh@119 -- # set +e 00:20:54.667 10:13:24 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:54.667 10:13:24 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:54.667 rmmod nvme_tcp 00:20:54.667 rmmod nvme_fabrics 00:20:54.667 rmmod nvme_keyring 00:20:54.667 10:13:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:54.667 10:13:25 -- nvmf/common.sh@123 -- # set -e 00:20:54.667 10:13:25 -- nvmf/common.sh@124 -- # return 0 00:20:54.667 10:13:25 -- nvmf/common.sh@477 -- # '[' -n 93071 ']' 00:20:54.667 10:13:25 -- nvmf/common.sh@478 -- # killprocess 93071 00:20:54.667 10:13:25 -- common/autotest_common.sh@926 -- # '[' -z 93071 ']' 00:20:54.667 10:13:25 -- common/autotest_common.sh@930 -- # kill -0 93071 00:20:54.667 10:13:25 -- common/autotest_common.sh@931 -- # uname 00:20:54.667 10:13:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:54.667 10:13:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 93071 00:20:54.667 killing process with pid 93071 00:20:54.667 10:13:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:54.667 10:13:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:54.667 10:13:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 93071' 00:20:54.667 10:13:25 -- common/autotest_common.sh@945 -- # kill 93071 00:20:54.667 10:13:25 -- common/autotest_common.sh@950 -- # wait 93071 00:20:54.925 10:13:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:54.925 10:13:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:54.925 10:13:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:54.925 10:13:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:54.925 10:13:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:54.925 10:13:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.925 10:13:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.925 10:13:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:54.925 10:13:25 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:20:54.925 00:20:54.925 real 0m48.915s 00:20:54.925 user 3m4.588s 00:20:54.925 sys 0m10.388s 00:20:54.925 10:13:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:54.925 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.925 ************************************ 00:20:54.925 END TEST nvmf_perf 00:20:54.925 ************************************ 00:20:54.926 10:13:25 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:54.926 10:13:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:20:54.926 10:13:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:54.926 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:54.926 ************************************ 00:20:54.926 START TEST nvmf_fio_host 00:20:54.926 ************************************ 00:20:54.926 10:13:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:54.926 * Looking for test storage... 00:20:54.926 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:54.926 10:13:25 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.926 10:13:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.926 10:13:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.926 10:13:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.926 10:13:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@5 -- # export PATH 00:20:54.926 10:13:25 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:54.926 10:13:25 -- nvmf/common.sh@7 -- # uname -s 00:20:54.926 10:13:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:54.926 10:13:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:54.926 10:13:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:54.926 10:13:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:54.926 10:13:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:54.926 10:13:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:54.926 10:13:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:54.926 10:13:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:54.926 10:13:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:54.926 10:13:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:54.926 10:13:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:54.926 10:13:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:20:54.926 10:13:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:54.926 10:13:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:54.926 10:13:25 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:54.926 10:13:25 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:54.926 10:13:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:54.926 10:13:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:54.926 10:13:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:54.926 10:13:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- paths/export.sh@5 -- # export PATH 00:20:54.926 10:13:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:54.926 10:13:25 -- nvmf/common.sh@46 -- # : 0 00:20:54.926 10:13:25 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:54.926 10:13:25 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:54.926 10:13:25 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:54.926 10:13:25 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:54.926 10:13:25 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:54.926 10:13:25 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:54.926 10:13:25 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:54.926 10:13:25 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:54.926 10:13:25 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:54.926 10:13:25 -- host/fio.sh@14 -- # nvmftestinit 00:20:54.926 10:13:25 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:54.926 10:13:25 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:54.926 10:13:25 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:54.926 10:13:25 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:54.926 10:13:25 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:54.926 10:13:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:54.926 10:13:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:54.926 10:13:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.185 10:13:25 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:55.185 10:13:25 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:55.185 10:13:25 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:55.185 10:13:25 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:55.185 10:13:25 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:55.185 10:13:25 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:55.185 10:13:25 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.185 10:13:25 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.185 10:13:25 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:55.185 10:13:25 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:55.185 10:13:25 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.185 10:13:25 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.185 10:13:25 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.185 10:13:25 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.185 10:13:25 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.185 10:13:25 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.185 10:13:25 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.185 10:13:25 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.185 10:13:25 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:55.185 10:13:25 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:55.185 Cannot find device "nvmf_tgt_br" 00:20:55.185 10:13:25 -- nvmf/common.sh@154 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.185 Cannot find device "nvmf_tgt_br2" 00:20:55.185 10:13:25 -- nvmf/common.sh@155 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:55.185 10:13:25 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:55.185 Cannot find device "nvmf_tgt_br" 00:20:55.185 10:13:25 -- nvmf/common.sh@157 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:55.185 Cannot find device "nvmf_tgt_br2" 00:20:55.185 10:13:25 -- nvmf/common.sh@158 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:55.185 10:13:25 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:55.185 10:13:25 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.185 10:13:25 -- nvmf/common.sh@161 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.185 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.185 10:13:25 -- nvmf/common.sh@162 -- # true 00:20:55.185 10:13:25 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.185 10:13:25 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.185 10:13:25 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.185 10:13:25 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.186 10:13:25 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.186 10:13:25 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.186 10:13:25 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.186 10:13:25 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:55.186 10:13:25 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:55.186 10:13:25 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:55.186 10:13:25 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:55.186 10:13:25 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:55.186 10:13:25 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:55.186 10:13:25 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.186 10:13:25 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
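From here the log repeats, for the fio host test, the same nvmf_veth_init bring-up already seen in the perf test: tolerated cleanup failures, namespace and veth re-creation, bridge and iptables rules, and ping verification, continuing directly below. As an aside, once the listener at 10.0.0.2:4420 exists, an ordinary host could also reach it with kernel nvme-cli — illustrative only, since these tests drive I/O through SPDK's own initiator (the fio spdk_nvme plugin) rather than the kernel path:

    # hypothetical manual connection; not performed by the test scripts
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1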
00:20:55.186 10:13:25 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.186 10:13:25 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:55.186 10:13:25 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:55.186 10:13:25 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.186 10:13:25 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.444 10:13:25 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.444 10:13:25 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.444 10:13:25 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.444 10:13:25 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:55.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:20:55.444 00:20:55.444 --- 10.0.0.2 ping statistics --- 00:20:55.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.444 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:20:55.444 10:13:25 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:55.444 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.444 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:20:55.444 00:20:55.444 --- 10.0.0.3 ping statistics --- 00:20:55.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.444 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:20:55.444 10:13:25 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:55.444 00:20:55.444 --- 10.0.0.1 ping statistics --- 00:20:55.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.444 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:55.444 10:13:25 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.444 10:13:25 -- nvmf/common.sh@421 -- # return 0 00:20:55.444 10:13:25 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:55.444 10:13:25 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.444 10:13:25 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:55.444 10:13:25 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:55.444 10:13:25 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.444 10:13:25 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:55.444 10:13:25 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:55.444 10:13:25 -- host/fio.sh@16 -- # [[ y != y ]] 00:20:55.444 10:13:25 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:55.444 10:13:25 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:55.444 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.444 10:13:25 -- host/fio.sh@24 -- # nvmfpid=94024 00:20:55.445 10:13:25 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:55.445 10:13:25 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.445 10:13:25 -- host/fio.sh@28 -- # waitforlisten 94024 00:20:55.445 10:13:25 -- common/autotest_common.sh@819 -- # '[' -z 94024 ']' 00:20:55.445 10:13:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.445 10:13:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:55.445 10:13:25 -- 
common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.445 10:13:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:55.445 10:13:25 -- common/autotest_common.sh@10 -- # set +x 00:20:55.445 [2024-07-14 10:13:25.849792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:20:55.445 [2024-07-14 10:13:25.849891] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.445 [2024-07-14 10:13:25.988693] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.703 [2024-07-14 10:13:26.053006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:55.703 [2024-07-14 10:13:26.053125] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.703 [2024-07-14 10:13:26.053137] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.703 [2024-07-14 10:13:26.053145] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:55.703 [2024-07-14 10:13:26.053285] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.703 [2024-07-14 10:13:26.053731] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.703 [2024-07-14 10:13:26.053997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.703 [2024-07-14 10:13:26.054000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.270 10:13:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:56.270 10:13:26 -- common/autotest_common.sh@852 -- # return 0 00:20:56.270 10:13:26 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:56.528 [2024-07-14 10:13:26.941347] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.528 10:13:26 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:56.528 10:13:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:56.528 10:13:26 -- common/autotest_common.sh@10 -- # set +x 00:20:56.528 10:13:27 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:56.811 Malloc1 00:20:56.811 10:13:27 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:57.069 10:13:27 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:57.327 10:13:27 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:57.585 [2024-07-14 10:13:27.916084] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:57.585 10:13:27 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:57.843 10:13:28 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:20:57.843 10:13:28 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.843 10:13:28 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.843 10:13:28 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:20:57.843 10:13:28 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:57.843 10:13:28 -- common/autotest_common.sh@1318 -- # local sanitizers 00:20:57.843 10:13:28 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.843 10:13:28 -- common/autotest_common.sh@1320 -- # shift 00:20:57.843 10:13:28 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:20:57.843 10:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # grep libasan 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:57.843 10:13:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:57.843 10:13:28 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:20:57.843 10:13:28 -- common/autotest_common.sh@1324 -- # asan_lib= 00:20:57.843 10:13:28 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:20:57.843 10:13:28 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:20:57.843 10:13:28 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:57.843 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:57.843 fio-3.35 00:20:57.843 Starting 1 thread 00:21:00.373 00:21:00.373 test: (groupid=0, jobs=1): err= 0: pid=94150: Sun Jul 14 10:13:30 2024 00:21:00.373 read: IOPS=10.3k, BW=40.2MiB/s (42.2MB/s)(80.7MiB/2006msec) 00:21:00.373 slat (nsec): min=1882, max=349678, avg=2514.84, stdev=3375.43 00:21:00.373 clat (usec): min=3284, max=11865, avg=6566.90, stdev=590.83 00:21:00.373 lat (usec): min=3352, max=11867, avg=6569.41, stdev=590.64 00:21:00.373 clat percentiles (usec): 00:21:00.373 | 1.00th=[ 5342], 5.00th=[ 5735], 10.00th=[ 5866], 20.00th=[ 6063], 00:21:00.373 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6521], 60.00th=[ 6652], 00:21:00.373 | 70.00th=[ 6783], 80.00th=[ 6980], 90.00th=[ 7308], 95.00th=[ 7570], 00:21:00.373 | 99.00th=[ 8225], 99.50th=[ 8455], 99.90th=[ 9503], 99.95th=[10945], 00:21:00.373 | 99.99th=[11731] 00:21:00.373 bw ( KiB/s): min=39984, max=42240, per=99.88%, avg=41135.75, stdev=1262.20, samples=4 00:21:00.373 iops : min= 9996, max=10560, avg=10283.75, stdev=315.75, samples=4 00:21:00.373 write: IOPS=10.3k, BW=40.3MiB/s (42.2MB/s)(80.8MiB/2006msec); 0 zone resets 00:21:00.373 slat 
(usec): min=2, max=250, avg= 2.64, stdev= 2.33 00:21:00.373 clat (usec): min=2488, max=10791, avg=5770.02, stdev=485.24 00:21:00.373 lat (usec): min=2501, max=10794, avg=5772.66, stdev=485.13 00:21:00.373 clat percentiles (usec): 00:21:00.373 | 1.00th=[ 4686], 5.00th=[ 5080], 10.00th=[ 5211], 20.00th=[ 5407], 00:21:00.373 | 30.00th=[ 5538], 40.00th=[ 5669], 50.00th=[ 5735], 60.00th=[ 5866], 00:21:00.373 | 70.00th=[ 5997], 80.00th=[ 6128], 90.00th=[ 6325], 95.00th=[ 6587], 00:21:00.373 | 99.00th=[ 6980], 99.50th=[ 7308], 99.90th=[ 8717], 99.95th=[ 9503], 00:21:00.373 | 99.99th=[10683] 00:21:00.373 bw ( KiB/s): min=40384, max=42112, per=99.90%, avg=41193.25, stdev=802.28, samples=4 00:21:00.373 iops : min=10096, max=10528, avg=10298.25, stdev=200.62, samples=4 00:21:00.373 lat (msec) : 4=0.11%, 10=99.84%, 20=0.06% 00:21:00.373 cpu : usr=65.24%, sys=24.54%, ctx=16, majf=0, minf=5 00:21:00.373 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:00.373 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.373 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:00.373 issued rwts: total=20653,20679,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.373 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:00.373 00:21:00.373 Run status group 0 (all jobs): 00:21:00.373 READ: bw=40.2MiB/s (42.2MB/s), 40.2MiB/s-40.2MiB/s (42.2MB/s-42.2MB/s), io=80.7MiB (84.6MB), run=2006-2006msec 00:21:00.373 WRITE: bw=40.3MiB/s (42.2MB/s), 40.3MiB/s-40.3MiB/s (42.2MB/s-42.2MB/s), io=80.8MiB (84.7MB), run=2006-2006msec 00:21:00.373 10:13:30 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.373 10:13:30 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.373 10:13:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:00.373 10:13:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:00.373 10:13:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:00.373 10:13:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.373 10:13:30 -- common/autotest_common.sh@1320 -- # shift 00:21:00.373 10:13:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:00.373 10:13:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:00.373 10:13:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:00.373 10:13:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:00.373 10:13:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:00.373 10:13:30 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:00.373 10:13:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:00.373 10:13:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:00.373 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:00.373 fio-3.35 00:21:00.373 Starting 1 thread 00:21:02.903 00:21:02.903 test: (groupid=0, jobs=1): err= 0: pid=94193: Sun Jul 14 10:13:33 2024 00:21:02.903 read: IOPS=8964, BW=140MiB/s (147MB/s)(281MiB/2006msec) 00:21:02.903 slat (usec): min=2, max=109, avg= 3.57, stdev= 2.23 00:21:02.903 clat (usec): min=1818, max=16822, avg=8552.56, stdev=2147.53 00:21:02.903 lat (usec): min=1822, max=16825, avg=8556.12, stdev=2147.72 00:21:02.903 clat percentiles (usec): 00:21:02.903 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5866], 20.00th=[ 6521], 00:21:02.903 | 30.00th=[ 7242], 40.00th=[ 7898], 50.00th=[ 8455], 60.00th=[ 9241], 00:21:02.903 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[11207], 95.00th=[12256], 00:21:02.903 | 99.00th=[13960], 99.50th=[15008], 99.90th=[16057], 99.95th=[16450], 00:21:02.903 | 99.99th=[16712] 00:21:02.903 bw ( KiB/s): min=65184, max=76672, per=49.55%, avg=71072.00, stdev=4751.61, samples=4 00:21:02.903 iops : min= 4074, max= 4792, avg=4442.00, stdev=296.98, samples=4 00:21:02.903 write: IOPS=5184, BW=81.0MiB/s (84.9MB/s)(144MiB/1780msec); 0 zone resets 00:21:02.903 slat (usec): min=31, max=357, avg=36.67, stdev= 9.11 00:21:02.903 clat (usec): min=2432, max=17805, avg=10283.28, stdev=1791.27 00:21:02.903 lat (usec): min=2476, max=17838, avg=10319.95, stdev=1793.20 00:21:02.903 clat percentiles (usec): 00:21:02.903 | 1.00th=[ 6783], 5.00th=[ 7701], 10.00th=[ 8225], 20.00th=[ 8717], 00:21:02.903 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10552], 00:21:02.903 | 70.00th=[11076], 80.00th=[11731], 90.00th=[12780], 95.00th=[13698], 00:21:02.903 | 99.00th=[14746], 99.50th=[15270], 99.90th=[16581], 99.95th=[17433], 00:21:02.903 | 99.99th=[17695] 00:21:02.903 bw ( KiB/s): min=68864, max=78080, per=89.00%, avg=73824.00, stdev=4169.59, samples=4 00:21:02.903 iops : min= 4304, max= 4880, avg=4614.00, stdev=260.60, samples=4 00:21:02.903 lat (msec) : 2=0.01%, 4=0.52%, 10=63.68%, 20=35.79% 00:21:02.903 cpu : usr=71.67%, sys=18.25%, ctx=4, majf=0, minf=1 00:21:02.903 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:02.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:02.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:02.903 issued rwts: total=17982,9228,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:02.903 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:02.903 00:21:02.903 Run status group 0 (all jobs): 00:21:02.903 READ: bw=140MiB/s (147MB/s), 140MiB/s-140MiB/s (147MB/s-147MB/s), io=281MiB (295MB), run=2006-2006msec 00:21:02.903 WRITE: bw=81.0MiB/s (84.9MB/s), 81.0MiB/s-81.0MiB/s (84.9MB/s-84.9MB/s), io=144MiB (151MB), run=1780-1780msec 00:21:02.903 10:13:33 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:02.903 10:13:33 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:02.903 10:13:33 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:02.903 10:13:33 -- host/fio.sh@51 -- # 
get_nvme_bdfs 00:21:02.903 10:13:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:02.903 10:13:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:21:02.903 10:13:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:02.903 10:13:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:02.903 10:13:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:02.903 10:13:33 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:02.903 10:13:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:21:02.903 10:13:33 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:21:03.162 Nvme0n1 00:21:03.162 10:13:33 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:03.419 10:13:33 -- host/fio.sh@53 -- # ls_guid=602cdac3-9227-44ea-9501-f7429f758430 00:21:03.419 10:13:33 -- host/fio.sh@54 -- # get_lvs_free_mb 602cdac3-9227-44ea-9501-f7429f758430 00:21:03.419 10:13:33 -- common/autotest_common.sh@1343 -- # local lvs_uuid=602cdac3-9227-44ea-9501-f7429f758430 00:21:03.419 10:13:33 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:03.419 10:13:33 -- common/autotest_common.sh@1345 -- # local fc 00:21:03.419 10:13:33 -- common/autotest_common.sh@1346 -- # local cs 00:21:03.419 10:13:33 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:03.677 10:13:34 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:03.677 { 00:21:03.677 "base_bdev": "Nvme0n1", 00:21:03.677 "block_size": 4096, 00:21:03.677 "cluster_size": 1073741824, 00:21:03.677 "free_clusters": 4, 00:21:03.677 "name": "lvs_0", 00:21:03.677 "total_data_clusters": 4, 00:21:03.677 "uuid": "602cdac3-9227-44ea-9501-f7429f758430" 00:21:03.677 } 00:21:03.677 ]' 00:21:03.677 10:13:34 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="602cdac3-9227-44ea-9501-f7429f758430") .free_clusters' 00:21:03.677 10:13:34 -- common/autotest_common.sh@1348 -- # fc=4 00:21:03.677 10:13:34 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="602cdac3-9227-44ea-9501-f7429f758430") .cluster_size' 00:21:03.934 4096 00:21:03.934 10:13:34 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:21:03.934 10:13:34 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:21:03.934 10:13:34 -- common/autotest_common.sh@1353 -- # echo 4096 00:21:03.934 10:13:34 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:21:03.934 52e65dc3-63b4-4e6b-a312-2d37202bfddf 00:21:03.934 10:13:34 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:21:04.191 10:13:34 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:21:04.449 10:13:34 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:04.706 10:13:35 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.706 10:13:35 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.706 10:13:35 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:04.706 10:13:35 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:04.706 10:13:35 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:04.706 10:13:35 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.706 10:13:35 -- common/autotest_common.sh@1320 -- # shift 00:21:04.706 10:13:35 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:04.706 10:13:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:04.706 10:13:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:04.706 10:13:35 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:04.706 10:13:35 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:04.706 10:13:35 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:04.706 10:13:35 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:04.706 10:13:35 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:04.706 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:04.706 fio-3.35 00:21:04.706 Starting 1 thread 00:21:07.233 00:21:07.233 test: (groupid=0, jobs=1): err= 0: pid=94350: Sun Jul 14 10:13:37 2024 00:21:07.233 read: IOPS=6722, BW=26.3MiB/s (27.5MB/s)(52.7MiB/2008msec) 00:21:07.233 slat (nsec): min=1917, max=314324, avg=2724.54, stdev=4008.33 00:21:07.233 clat (usec): min=3995, max=17079, avg=10111.49, stdev=932.87 00:21:07.233 lat (usec): min=4004, max=17081, avg=10114.21, stdev=932.71 00:21:07.233 clat percentiles (usec): 00:21:07.233 | 1.00th=[ 8094], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:07.233 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:21:07.233 | 70.00th=[10552], 80.00th=[10814], 90.00th=[11338], 95.00th=[11600], 00:21:07.233 | 99.00th=[12387], 99.50th=[12649], 99.90th=[14091], 99.95th=[16909], 00:21:07.233 | 99.99th=[17171] 00:21:07.233 bw ( KiB/s): min=26067, max=27328, per=99.87%, avg=26856.75, stdev=582.96, samples=4 00:21:07.233 iops : min= 6516, max= 6832, avg=6714.00, stdev=146.08, samples=4 00:21:07.233 write: IOPS=6729, BW=26.3MiB/s (27.6MB/s)(52.8MiB/2008msec); 0 zone resets 00:21:07.233 slat (nsec): min=1968, max=252355, avg=2816.89, stdev=3027.85 00:21:07.233 clat (usec): min=2277, max=15704, avg=8840.31, stdev=797.00 00:21:07.233 lat (usec): min=2290, max=15706, avg=8843.12, stdev=796.91 00:21:07.233 
clat percentiles (usec): 00:21:07.233 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:21:07.233 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 8979], 00:21:07.233 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10028], 00:21:07.233 | 99.00th=[10552], 99.50th=[10814], 99.90th=[13304], 99.95th=[13829], 00:21:07.233 | 99.99th=[15664] 00:21:07.233 bw ( KiB/s): min=26672, max=27105, per=99.88%, avg=26884.25, stdev=176.84, samples=4 00:21:07.233 iops : min= 6668, max= 6776, avg=6721.00, stdev=44.11, samples=4 00:21:07.233 lat (msec) : 4=0.04%, 10=70.01%, 20=29.95% 00:21:07.233 cpu : usr=70.75%, sys=22.02%, ctx=10, majf=0, minf=5 00:21:07.233 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:07.233 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:07.233 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:07.233 issued rwts: total=13499,13512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:07.233 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:07.233 00:21:07.233 Run status group 0 (all jobs): 00:21:07.233 READ: bw=26.3MiB/s (27.5MB/s), 26.3MiB/s-26.3MiB/s (27.5MB/s-27.5MB/s), io=52.7MiB (55.3MB), run=2008-2008msec 00:21:07.233 WRITE: bw=26.3MiB/s (27.6MB/s), 26.3MiB/s-26.3MiB/s (27.6MB/s-27.6MB/s), io=52.8MiB (55.3MB), run=2008-2008msec 00:21:07.233 10:13:37 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:07.233 10:13:37 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:21:07.491 10:13:38 -- host/fio.sh@64 -- # ls_nested_guid=0cd8ca9b-2b0a-4c69-874f-2e3589af0079 00:21:07.491 10:13:38 -- host/fio.sh@65 -- # get_lvs_free_mb 0cd8ca9b-2b0a-4c69-874f-2e3589af0079 00:21:07.491 10:13:38 -- common/autotest_common.sh@1343 -- # local lvs_uuid=0cd8ca9b-2b0a-4c69-874f-2e3589af0079 00:21:07.491 10:13:38 -- common/autotest_common.sh@1344 -- # local lvs_info 00:21:07.491 10:13:38 -- common/autotest_common.sh@1345 -- # local fc 00:21:07.491 10:13:38 -- common/autotest_common.sh@1346 -- # local cs 00:21:07.491 10:13:38 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:07.749 10:13:38 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:21:07.749 { 00:21:07.749 "base_bdev": "Nvme0n1", 00:21:07.749 "block_size": 4096, 00:21:07.749 "cluster_size": 1073741824, 00:21:07.749 "free_clusters": 0, 00:21:07.749 "name": "lvs_0", 00:21:07.749 "total_data_clusters": 4, 00:21:07.749 "uuid": "602cdac3-9227-44ea-9501-f7429f758430" 00:21:07.749 }, 00:21:07.749 { 00:21:07.749 "base_bdev": "52e65dc3-63b4-4e6b-a312-2d37202bfddf", 00:21:07.749 "block_size": 4096, 00:21:07.749 "cluster_size": 4194304, 00:21:07.749 "free_clusters": 1022, 00:21:07.749 "name": "lvs_n_0", 00:21:07.749 "total_data_clusters": 1022, 00:21:07.749 "uuid": "0cd8ca9b-2b0a-4c69-874f-2e3589af0079" 00:21:07.749 } 00:21:07.749 ]' 00:21:07.749 10:13:38 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="0cd8ca9b-2b0a-4c69-874f-2e3589af0079") .free_clusters' 00:21:07.749 10:13:38 -- common/autotest_common.sh@1348 -- # fc=1022 00:21:07.749 10:13:38 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="0cd8ca9b-2b0a-4c69-874f-2e3589af0079") .cluster_size' 00:21:08.007 4088 00:21:08.007 10:13:38 -- common/autotest_common.sh@1349 -- # cs=4194304 00:21:08.007 10:13:38 -- 
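common/autotest_common.sh@1352 -- # free_mb=4088

The free_mb value traced above is simply the lvstore's free_clusters multiplied by its cluster_size, converted to MiB. A minimal sketch of the same computation, assuming the bdev_lvol_get_lvstores JSON shape shown above; the $uuid variable holding the lvstore UUID is a hypothetical name for this sketch:

  lvs=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(jq ".[] | select(.uuid==\"$uuid\") .free_clusters" <<< "$lvs")  # 1022 for lvs_n_0
  cs=$(jq ".[] | select(.uuid==\"$uuid\") .cluster_size" <<< "$lvs")   # 4194304, i.e. 4 MiB clusters
  free_mb=$((fc * cs / 1024 / 1024))  # 1022 * 4194304 / 2^20 = 4088, matching the trace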
00:21:08.007 10:13:38 -- common/autotest_common.sh@1353 -- # echo 4088 00:21:08.007 10:13:38 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:21:08.264 1e031e88-de3b-4a9a-a72a-eaa63ece95a2 00:21:08.264 10:13:38 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:21:08.264 10:13:38 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:21:08.541 10:13:38 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:21:08.798 10:13:39 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:08.798 10:13:39 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:08.798 10:13:39 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:21:08.798 10:13:39 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:08.798 10:13:39 -- common/autotest_common.sh@1318 -- # local sanitizers 00:21:08.798 10:13:39 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.798 10:13:39 -- common/autotest_common.sh@1320 -- # shift 00:21:08.798 10:13:39 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:21:08.798 10:13:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # grep libasan 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:08.798 10:13:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:08.798 10:13:39 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:21:08.798 10:13:39 -- common/autotest_common.sh@1324 -- # asan_lib= 00:21:08.798 10:13:39 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:21:08.798 10:13:39 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:08.798 10:13:39 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:09.056 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:09.056 fio-3.35 00:21:09.056 Starting 1 thread 00:21:11.584 00:21:11.584 test: (groupid=0, jobs=1): err= 0: pid=94465: Sun Jul 14 10:13:41 2024 00:21:11.584 read: IOPS=5970, BW=23.3MiB/s (24.5MB/s)(46.8MiB/2008msec) 00:21:11.584 slat (nsec): min=1971, max=342672, avg=3126.57,
stdev=4466.75 00:21:11.584 clat (usec): min=4455, max=20242, avg=11332.78, stdev=1082.14 00:21:11.584 lat (usec): min=4465, max=20245, avg=11335.91, stdev=1081.91 00:21:11.584 clat percentiles (usec): 00:21:11.584 | 1.00th=[ 8979], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:21:11.584 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:21:11.584 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[13173], 00:21:11.584 | 99.00th=[14091], 99.50th=[14484], 99.90th=[17695], 99.95th=[19006], 00:21:11.584 | 99.99th=[19268] 00:21:11.584 bw ( KiB/s): min=22912, max=24232, per=99.74%, avg=23818.00, stdev=609.73, samples=4 00:21:11.584 iops : min= 5728, max= 6058, avg=5954.50, stdev=152.43, samples=4 00:21:11.584 write: IOPS=5952, BW=23.3MiB/s (24.4MB/s)(46.7MiB/2008msec); 0 zone resets 00:21:11.584 slat (usec): min=2, max=289, avg= 3.24, stdev= 3.61 00:21:11.584 clat (usec): min=2493, max=17865, avg=10060.00, stdev=912.28 00:21:11.584 lat (usec): min=2506, max=17867, avg=10063.25, stdev=912.12 00:21:11.584 clat percentiles (usec): 00:21:11.584 | 1.00th=[ 8029], 5.00th=[ 8717], 10.00th=[ 8979], 20.00th=[ 9372], 00:21:11.584 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:21:11.584 | 70.00th=[10552], 80.00th=[10683], 90.00th=[11076], 95.00th=[11469], 00:21:11.584 | 99.00th=[11994], 99.50th=[12256], 99.90th=[16450], 99.95th=[17695], 00:21:11.584 | 99.99th=[17957] 00:21:11.584 bw ( KiB/s): min=23680, max=23872, per=100.00%, avg=23810.00, stdev=90.60, samples=4 00:21:11.584 iops : min= 5920, max= 5968, avg=5952.50, stdev=22.65, samples=4 00:21:11.584 lat (msec) : 4=0.04%, 10=27.54%, 20=72.42%, 50=0.01% 00:21:11.584 cpu : usr=67.68%, sys=24.85%, ctx=8, majf=0, minf=5 00:21:11.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:21:11.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:11.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:11.584 issued rwts: total=11988,11953,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:11.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:11.584 00:21:11.584 Run status group 0 (all jobs): 00:21:11.584 READ: bw=23.3MiB/s (24.5MB/s), 23.3MiB/s-23.3MiB/s (24.5MB/s-24.5MB/s), io=46.8MiB (49.1MB), run=2008-2008msec 00:21:11.584 WRITE: bw=23.3MiB/s (24.4MB/s), 23.3MiB/s-23.3MiB/s (24.4MB/s-24.4MB/s), io=46.7MiB (49.0MB), run=2008-2008msec 00:21:11.584 10:13:41 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:21:11.584 10:13:41 -- host/fio.sh@74 -- # sync 00:21:11.584 10:13:41 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:21:11.863 10:13:42 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:12.130 10:13:42 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:21:12.388 10:13:42 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:12.388 10:13:42 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:21:13.323 10:13:43 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:13.323 10:13:43 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:13.323 10:13:43 -- host/fio.sh@86 -- # nvmftestfini 00:21:13.323 10:13:43 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:13.323 10:13:43 
-- nvmf/common.sh@116 -- # sync 00:21:13.323 10:13:43 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:13.323 10:13:43 -- nvmf/common.sh@119 -- # set +e 00:21:13.323 10:13:43 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:13.323 10:13:43 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:13.323 rmmod nvme_tcp 00:21:13.323 rmmod nvme_fabrics 00:21:13.323 rmmod nvme_keyring 00:21:13.323 10:13:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:13.323 10:13:43 -- nvmf/common.sh@123 -- # set -e 00:21:13.323 10:13:43 -- nvmf/common.sh@124 -- # return 0 00:21:13.323 10:13:43 -- nvmf/common.sh@477 -- # '[' -n 94024 ']' 00:21:13.323 10:13:43 -- nvmf/common.sh@478 -- # killprocess 94024 00:21:13.323 10:13:43 -- common/autotest_common.sh@926 -- # '[' -z 94024 ']' 00:21:13.323 10:13:43 -- common/autotest_common.sh@930 -- # kill -0 94024 00:21:13.323 10:13:43 -- common/autotest_common.sh@931 -- # uname 00:21:13.323 10:13:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:13.323 10:13:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94024 00:21:13.581 killing process with pid 94024 00:21:13.581 10:13:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:21:13.581 10:13:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:21:13.581 10:13:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94024' 00:21:13.581 10:13:43 -- common/autotest_common.sh@945 -- # kill 94024 00:21:13.581 10:13:43 -- common/autotest_common.sh@950 -- # wait 94024 00:21:13.581 10:13:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:13.581 10:13:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:13.581 10:13:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:13.581 10:13:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:13.581 10:13:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:13.581 10:13:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.581 10:13:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.581 10:13:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.839 10:13:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:13.839 00:21:13.839 real 0m18.765s 00:21:13.839 user 1m22.400s 00:21:13.839 sys 0m4.349s 00:21:13.839 10:13:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:13.839 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:13.839 ************************************ 00:21:13.839 END TEST nvmf_fio_host 00:21:13.839 ************************************ 00:21:13.839 10:13:44 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:13.839 10:13:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:13.839 10:13:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:13.839 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:13.839 ************************************ 00:21:13.839 START TEST nvmf_failover 00:21:13.839 ************************************ 00:21:13.839 10:13:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:13.839 * Looking for test storage... 
00:21:13.839 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:13.839 10:13:44 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:13.839 10:13:44 -- nvmf/common.sh@7 -- # uname -s 00:21:13.839 10:13:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.839 10:13:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.839 10:13:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.839 10:13:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.839 10:13:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.839 10:13:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.839 10:13:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.839 10:13:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.839 10:13:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.839 10:13:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.839 10:13:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:21:13.839 10:13:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:21:13.839 10:13:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.839 10:13:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.839 10:13:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:13.839 10:13:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:13.839 10:13:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.839 10:13:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:13.839 10:13:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:13.839 10:13:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.839 10:13:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.839 10:13:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.839 10:13:44 -- paths/export.sh@5 
-- # export PATH 00:21:13.840 10:13:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:13.840 10:13:44 -- nvmf/common.sh@46 -- # : 0 00:21:13.840 10:13:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:13.840 10:13:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:13.840 10:13:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:13.840 10:13:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:13.840 10:13:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:13.840 10:13:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:13.840 10:13:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:13.840 10:13:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:13.840 10:13:44 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:13.840 10:13:44 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:13.840 10:13:44 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:13.840 10:13:44 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.840 10:13:44 -- host/failover.sh@18 -- # nvmftestinit 00:21:13.840 10:13:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:13.840 10:13:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:13.840 10:13:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:13.840 10:13:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:13.840 10:13:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:13.840 10:13:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.840 10:13:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.840 10:13:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.840 10:13:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:13.840 10:13:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:13.840 10:13:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:13.840 10:13:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:13.840 10:13:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:13.840 10:13:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:13.840 10:13:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:13.840 10:13:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:13.840 10:13:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:13.840 10:13:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:13.840 10:13:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:13.840 10:13:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:13.840 10:13:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:13.840 10:13:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:13.840 10:13:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:13.840 10:13:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:13.840 10:13:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
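The NVMF_TARGET_NS_CMD array defined above is how the harness pushes every target-side command into the nvmf_tgt_ns_spdk network namespace. A minimal sketch of the pattern, reusing the variable names and addresses from this run:

  NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
  NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
  # target-side configuration goes through the prefix, e.g. addressing the target veth:
  "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  # the target application itself is launched the same way (cf. nvmfappstart below):
  "${NVMF_TARGET_NS_CMD[@]}" /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE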
00:21:13.840 10:13:44 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:13.840 10:13:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:13.840 10:13:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:13.840 Cannot find device "nvmf_tgt_br" 00:21:13.840 10:13:44 -- nvmf/common.sh@154 -- # true 00:21:13.840 10:13:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:13.840 Cannot find device "nvmf_tgt_br2" 00:21:13.840 10:13:44 -- nvmf/common.sh@155 -- # true 00:21:13.840 10:13:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:13.840 10:13:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:13.840 Cannot find device "nvmf_tgt_br" 00:21:13.840 10:13:44 -- nvmf/common.sh@157 -- # true 00:21:13.840 10:13:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:13.840 Cannot find device "nvmf_tgt_br2" 00:21:13.840 10:13:44 -- nvmf/common.sh@158 -- # true 00:21:13.840 10:13:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:14.097 10:13:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:14.097 10:13:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:14.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.097 10:13:44 -- nvmf/common.sh@161 -- # true 00:21:14.097 10:13:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:14.097 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:14.097 10:13:44 -- nvmf/common.sh@162 -- # true 00:21:14.097 10:13:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:14.097 10:13:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:14.097 10:13:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:14.097 10:13:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:14.097 10:13:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:14.097 10:13:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:14.097 10:13:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:14.097 10:13:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:14.097 10:13:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:14.097 10:13:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:14.097 10:13:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:14.097 10:13:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:14.097 10:13:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:14.097 10:13:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:14.097 10:13:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:14.097 10:13:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:14.097 10:13:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:14.097 10:13:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:14.097 10:13:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:21:14.097 10:13:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:14.097 10:13:44 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:21:14.097 10:13:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:14.097 10:13:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:14.097 10:13:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:14.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:14.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:21:14.097 00:21:14.097 --- 10.0.0.2 ping statistics --- 00:21:14.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.097 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:21:14.097 10:13:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:14.097 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:14.097 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.044 ms 00:21:14.097 00:21:14.097 --- 10.0.0.3 ping statistics --- 00:21:14.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.097 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:14.097 10:13:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:14.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:14.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:21:14.097 00:21:14.098 --- 10.0.0.1 ping statistics --- 00:21:14.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:14.098 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:21:14.098 10:13:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:14.098 10:13:44 -- nvmf/common.sh@421 -- # return 0 00:21:14.098 10:13:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:14.098 10:13:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:14.098 10:13:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:14.098 10:13:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:14.098 10:13:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:14.098 10:13:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:14.098 10:13:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:14.098 10:13:44 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:14.098 10:13:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:14.098 10:13:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:14.354 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.354 10:13:44 -- nvmf/common.sh@469 -- # nvmfpid=94731 00:21:14.354 10:13:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:14.354 10:13:44 -- nvmf/common.sh@470 -- # waitforlisten 94731 00:21:14.354 10:13:44 -- common/autotest_common.sh@819 -- # '[' -z 94731 ']' 00:21:14.354 10:13:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:14.354 10:13:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:14.354 10:13:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:14.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:14.354 10:13:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:14.354 10:13:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.354 [2024-07-14 10:13:44.704537] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
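nvmf_tgt is started with core mask -m 0xE (binary 1110), i.e. cores 1, 2 and 3, which matches the three "Reactor started on core" notices in the startup log that follows. A quick sketch for decoding such a mask in shell:

  mask=0xE
  for core in {0..31}; do
    (( (mask >> core) & 1 )) && echo "core $core selected"
  done
  # prints cores 1, 2 and 3 for 0xE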
00:21:14.354 [2024-07-14 10:13:44.704641] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:14.354 [2024-07-14 10:13:44.844637] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:14.611 [2024-07-14 10:13:44.940538] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:14.611 [2024-07-14 10:13:44.940765] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:14.611 [2024-07-14 10:13:44.940779] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:14.611 [2024-07-14 10:13:44.940788] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:14.611 [2024-07-14 10:13:44.941338] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.611 [2024-07-14 10:13:44.941576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:14.611 [2024-07-14 10:13:44.941583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.175 10:13:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:15.175 10:13:45 -- common/autotest_common.sh@852 -- # return 0 00:21:15.175 10:13:45 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:15.175 10:13:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:15.175 10:13:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.433 10:13:45 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:15.433 10:13:45 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:15.690 [2024-07-14 10:13:46.015005] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.690 10:13:46 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:15.948 Malloc0 00:21:15.948 10:13:46 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:16.205 10:13:46 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:16.205 10:13:46 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:16.463 [2024-07-14 10:13:46.986043] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.463 10:13:47 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:16.721 [2024-07-14 10:13:47.198205] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:16.721 10:13:47 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:16.979 [2024-07-14 10:13:47.402440] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:16.979 10:13:47 -- host/failover.sh@31 -- # bdevperf_pid=94847 00:21:16.979 10:13:47 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z 
-r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:16.979 10:13:47 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:16.979 10:13:47 -- host/failover.sh@34 -- # waitforlisten 94847 /var/tmp/bdevperf.sock 00:21:16.979 10:13:47 -- common/autotest_common.sh@819 -- # '[' -z 94847 ']' 00:21:16.979 10:13:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.979 10:13:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:16.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 10:13:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.979 10:13:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:16.979 10:13:47 -- common/autotest_common.sh@10 -- # set +x 00:21:17.913 10:13:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:17.913 10:13:48 -- common/autotest_common.sh@852 -- # return 0 00:21:17.913 10:13:48 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.171 NVMe0n1 00:21:18.171 10:13:48 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:18.429 00:21:18.429 10:13:48 -- host/failover.sh@39 -- # run_test_pid=94896 00:21:18.429 10:13:48 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:18.429 10:13:48 -- host/failover.sh@41 -- # sleep 1 00:21:19.806 10:13:49 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:19.806 [2024-07-14 10:13:50.118483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1377800 is same with the state(5) to be set 00:21:19.806 [... identical nvmf_tcp_qpair_set_recv_state *ERROR* messages for tqpair=0x1377800 (10:13:50.118547 through 10:13:50.119049) elided ...]
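The error burst above is the target tearing down its 4420 connection after host/failover.sh@43 removed that listener while bdevperf still had I/O in flight; the initiator then fails over to the second path registered on 4421. The sequence can be sketched against any running target by reusing the exact commands from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # register two paths to one subsystem under a single controller name (-b NVMe0)
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # dropping the active listener on the target forces I/O onto the surviving path
  $rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420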
10:13:50 -- host/failover.sh@45 -- # sleep 3 00:21:23.094 10:13:53 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:23.094 00:21:23.094 10:13:53 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:23.352 [2024-07-14 10:13:53.676726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [... identical nvmf_tcp_qpair_set_recv_state *ERROR* messages for tqpair=0x1378ef0 (10:13:53.676789 through 10:13:53.677288) elided ...]
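After this second burst the only live path is the one attached on 4422; the test then fails back below by re-adding the 4420 listener and removing 4422. Between steps, the initiator's view of its paths can be inspected from the bdevperf RPC socket, as a sketch (bdev_nvme_get_controllers is a standard SPDK RPC; the jq filter is illustrative only):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'
  # expected to list NVMe0 as long as the controller still has a usable path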
recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677154] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677163] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677181] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677190] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677198] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677206] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677223] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677231] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677240] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677271] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677279] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 [2024-07-14 10:13:53.677288] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1378ef0 is same with the state(5) to be set 00:21:23.352 10:13:53 -- host/failover.sh@50 -- # sleep 3 00:21:26.633 10:13:56 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.633 [2024-07-14 10:13:56.939293] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.633 10:13:56 -- host/failover.sh@55 -- # sleep 1 00:21:27.568 10:13:57 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:27.827 [2024-07-14 10:13:58.178889] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178945] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178965] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178974] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178982] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.178991] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179000] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179008] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179016] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179024] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179032] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179040] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179049] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179057] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179065] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179073] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179081] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179089] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179097] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179105] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179113] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179121] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 10:13:58.179129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13795d0 is same with the state(5) to be set 00:21:27.827 [2024-07-14 
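The rpc.py calls traced above are the listener-migration phase of the failover exercise: attach a second path to NVMe0 on port 4422, remove the 4421 listener the host is currently using (forcing I/O onto the new path), restore 4420, then retire 4422. A minimal sketch of that sequence, assuming the same socket, address, and NQN as this run; the shell below is illustrative, not the verbatim test/nvmf/host/failover.sh source:

    #!/usr/bin/env bash
    # Sketch of the listener migration traced at failover.sh@47-57 above.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    # @47: give bdevperf's controller a second path via port 4422
    "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n "$nqn"

    # @48-50: drop the listener currently carrying I/O, give failover time
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4421
    sleep 3

    # @53-57: restore the original port, then retire the temporary one
    "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    "$rpc" nvmf_subsystem_remove_listener "$nqn" -t tcp -a 10.0.0.2 -s 4422

Each remove_listener lands in the target log as the burst of nvmf_tcp_qpair_set_recv_state errors collapsed above.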
00:21:27.827 10:13:58 -- host/failover.sh@59 -- # wait 94896
00:21:34.382 0
00:21:34.382 10:14:04 -- host/failover.sh@61 -- # killprocess 94847
00:21:34.382 10:14:04 -- common/autotest_common.sh@926 -- # '[' -z 94847 ']'
00:21:34.382 10:14:04 -- common/autotest_common.sh@930 -- # kill -0 94847
00:21:34.382 10:14:04 -- common/autotest_common.sh@931 -- # uname
00:21:34.382 10:14:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:34.382 10:14:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94847
00:21:34.382 killing process with pid 94847
00:21:34.382 10:14:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:34.382 10:14:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:34.382 10:14:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94847'
00:21:34.382 10:14:04 -- common/autotest_common.sh@945 -- # kill 94847
00:21:34.382 10:14:04 -- common/autotest_common.sh@950 -- # wait 94847
00:21:34.382 10:14:04 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:34.382 [2024-07-14 10:13:47.459610] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:21:34.382 [2024-07-14 10:13:47.459716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid94847 ]
00:21:34.382 [2024-07-14 10:13:47.596796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:34.382 [2024-07-14 10:13:47.673655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:34.382 Running I/O for 15 seconds...
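The killprocess trace above (common/autotest_common.sh@926-950) can be read back from its xtrace into the helper's control flow: validate the pid argument, probe it with kill -0, look up its command name, handle the sudo case, then kill and reap. A hedged reconstruction of the traced branch only; the sudo branch and the real helper's error handling are not visible in this excerpt:

    # Reconstruction of the killprocess branch exercised above (a sketch,
    # not the verbatim autotest_common.sh source).
    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1              # @926: a pid must be given
        kill -0 "$pid" || return 1             # @930: process must still exist
        if [ "$(uname)" = Linux ]; then        # @931
            process_name=$(ps --no-headers -o comm= "$pid")  # @932: reactor_0 here
        fi
        if [ "$process_name" != sudo ]; then   # @936: traced run is the non-sudo case
            echo "killing process with pid $pid"   # @944
            kill "$pid"                            # @945
            wait "$pid"                            # @950: reap and collect status
        fi
    }

Everything printed after the @63 cat is the saved bdevperf log (try.txt) replayed into the console, which is why the timestamps jump back to the 10:13:47 initialization lines.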
00:21:34.382 [2024-07-14 10:13:50.119441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:128312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:34.382 [2024-07-14 10:13:50.119483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.385 [~70 further READ/WRITE print_command lines (lba 127904 through 129128), each followed by the same ABORTED - SQ DELETION (00/08) completion, elided]
00:21:34.386 [2024-07-14 10:13:50.123434] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1adbf70 is same with the state(5) to be set
00:21:34.386 [2024-07-14 10:13:50.123456] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:34.386 [2024-07-14 10:13:50.123467] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:34.386 [2024-07-14 10:13:50.123478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:128536 len:8 PRP1 0x0 PRP2 0x0
00:21:34.386 [2024-07-14 10:13:50.123492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [2024-07-14 10:13:50.123550] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1adbf70 was disconnected and freed. reset controller.
00:21:34.386 [2024-07-14 10:13:50.123569] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:34.386 [2024-07-14 10:13:50.123623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.386 [2024-07-14 10:13:50.123644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeats for cid:1, cid:2 and cid:3]
00:21:34.386 [2024-07-14 10:13:50.123744] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.386 [2024-07-14 10:13:50.123786] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abcf20 (9): Bad file descriptor
00:21:34.386 [2024-07-14 10:13:50.126319] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.386 [2024-07-14 10:13:50.159557] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
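This block is one complete failover round trip as bdev_nvme sees it: the active data qpair is torn down, the trid flips from 4420 to 4421, outstanding admin requests abort, and the controller reset completes. When inspecting a saved try.txt by hand, the milestones can be pulled out with patterns that appear verbatim in this log; the path below matches the @63 cat above:

    # Extract the failover milestones from the saved bdevperf log.
    log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
    grep -n 'was disconnected and freed'      "$log"  # data qpair torn down
    grep -n 'Start failover from'             "$log"  # trid switch 4420 -> 4421
    grep -n 'resetting controller'            "$log"  # reconnect begins
    grep -n 'Resetting controller successful' "$log"  # reset completed
    grep -c 'ABORTED - SQ DELETION'           "$log"  # volume of aborted requests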
00:21:34.386 [2024-07-14 10:13:53.677085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.386 [2024-07-14 10:13:53.677154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [2024-07-14 10:13:53.677192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.386 [2024-07-14 10:13:53.677210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [2024-07-14 10:13:53.677225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.386 [2024-07-14 10:13:53.677239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [2024-07-14 10:13:53.677254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.386 [2024-07-14 10:13:53.677268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.386 [2024-07-14 10:13:53.677282] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1abcf20 is same with the state(5) to be set
[... long run of repeated nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ/WRITE sqid:1 command dumps, each paired with an nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 record, for the queued I/O aborted during the qpair teardown ...]
00:21:34.391 [2024-07-14 10:13:53.681430] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac9390 is same with the state(5) to be set
00:21:34.391 [2024-07-14 10:13:53.681446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:34.391 [2024-07-14 10:13:53.681457] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:34.391 [2024-07-14 10:13:53.681468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22736 len:8 PRP1 0x0 PRP2 0x0
00:21:34.391 [2024-07-14 10:13:53.681482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.391 [2024-07-14 10:13:53.681539] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac9390 was disconnected and freed. reset controller.
00:21:34.391 [2024-07-14 10:13:53.681557] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:21:34.391 [2024-07-14 10:13:53.681573] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.391 [2024-07-14 10:13:53.683948] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.391 [2024-07-14 10:13:53.683985] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abcf20 (9): Bad file descriptor
00:21:34.391 [2024-07-14 10:13:53.712517] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
[... long run of repeated nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* READ/WRITE sqid:1 command dumps, each paired with an nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 record, for the queued I/O aborted during the next qpair teardown ...]
00:21:34.392 [2024-07-14 10:13:58.181179] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:20072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-07-14 10:13:58.181192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.392 [2024-07-14 10:13:58.181207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-07-14 10:13:58.181247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.392 [2024-07-14 10:13:58.181264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.392 [2024-07-14 10:13:58.181278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.392 [2024-07-14 10:13:58.181294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:20128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:20160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:20176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:43 nsid:1 lba:20256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:20336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:20768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.393 [2024-07-14 10:13:58.181622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:20784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:20800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.393 [2024-07-14 10:13:58.181735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:20808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.393 [2024-07-14 10:13:58.181763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:20816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20824 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:20832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:20848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:20856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.393 [2024-07-14 10:13:58.181940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.181984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:20872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.181997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.182012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:20880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.393 [2024-07-14 10:13:58.182025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.182040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.393 [2024-07-14 10:13:58.182053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.393 [2024-07-14 10:13:58.182068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:20904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:21:34.394 [2024-07-14 10:13:58.182110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:20952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:20960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:20968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:20976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:20984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182438] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:21000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:21008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:21024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:21032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:21056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182746] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:21072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:21080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:34.394 [2024-07-14 10:13:58.182924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:20344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.182984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.182999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:20376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.183012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.183028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.183042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.183058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.183072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.183087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:20416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.183108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.394 [2024-07-14 10:13:58.183124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:20432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.394 [2024-07-14 10:13:58.183139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:20440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:20520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:20528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:34.395 [2024-07-14 10:13:58.183406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:34.395 [2024-07-14 10:13:58.183420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:34.395 [2024-07-14 10:13:58.183435] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c827a0 is same with the state(5) to be set
00:21:34.395 [2024-07-14 10:13:58.183452] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:21:34.395 [2024-07-14 10:13:58.183463] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:34.395 [2024-07-14 10:13:58.183474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20576 len:8 PRP1 0x0 PRP2 0x0
00:21:34.395 [2024-07-14 10:13:58.183487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.395 [2024-07-14 10:13:58.183546] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c827a0 was disconnected and freed. reset controller.
00:21:34.395 [2024-07-14 10:13:58.183565] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:21:34.395 [2024-07-14 10:13:58.183632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.395 [2024-07-14 10:13:58.183654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.395 [2024-07-14 10:13:58.183670] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.395 [2024-07-14 10:13:58.183684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.395 [2024-07-14 10:13:58.183698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.395 [2024-07-14 10:13:58.183711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.395 [2024-07-14 10:13:58.183726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:34.395 [2024-07-14 10:13:58.183739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:34.395 [2024-07-14 10:13:58.183753] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:34.395 [2024-07-14 10:13:58.186280] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:34.395 [2024-07-14 10:13:58.186319] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1abcf20 (9): Bad file descriptor
00:21:34.395 [2024-07-14 10:13:58.217282] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
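The reset just logged is the last of three: host/failover.sh (traced at @76-101 below) attaches a single bdevperf controller over three listener ports, then removes paths one at a time, forcing bdev_nvme to fail over each time. Condensed into a sketch, with every address, port, socket, and NQN taken from the trace (the loop structure is only an editorial summary, not the script's literal text):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # extra target-side listeners so the initiator has three paths (@76-77)
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4422
  # one controller, attached over each path (@78-80)
  for port in 4420 4421 4422; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
           -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
  done
  # drop paths in the order seen in the trace; the controller must survive each failover
  for port in 4420 4422 4421; do
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
      $RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
           -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$NQN"
      sleep 3    # time for bdev_nvme to reconnect on a remaining path
  done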
00:21:34.395
00:21:34.395                                        Latency(us)
00:21:34.395 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:21:34.395 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:34.395   Verification LBA range: start 0x0 length 0x4000
00:21:34.395   NVMe0n1            :      15.01 14317.63    55.93   306.28     0.00   8735.73   569.72 16920.20
00:21:34.395 ===================================================================================================================
00:21:34.395 Total              :            14317.63    55.93   306.28     0.00   8735.73   569.72 16920.20
00:21:34.395 Received shutdown signal, test time was about 15.000000 seconds
00:21:34.395
00:21:34.395                                        Latency(us)
00:21:34.395 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:21:34.395 ===================================================================================================================
00:21:34.395 Total              :                0.00     0.00     0.00     0.00      0.00     0.00     0.00
10:14:04 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:34.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:34.395 10:14:04 -- host/failover.sh@65 -- # count=3
10:14:04 -- host/failover.sh@67 -- # (( count != 3 ))
10:14:04 -- host/failover.sh@73 -- # bdevperf_pid=95098
10:14:04 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
10:14:04 -- host/failover.sh@75 -- # waitforlisten 95098 /var/tmp/bdevperf.sock
10:14:04 -- common/autotest_common.sh@819 -- # '[' -z 95098 ']'
10:14:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
10:14:04 -- common/autotest_common.sh@824 -- # local max_retries=100
10:14:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
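For readers skimming the stream: host/failover.sh@65-67 above is the pass/fail gate for the 15-second run. It counts 'Resetting controller successful' lines in the captured bdevperf log and expects one per removed path. A minimal sketch of that gate, with the log path taken from the cat at @94 below and the failure action assumed from the (( count != 3 )) guard:

  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt    # path shown at @94 below
  count=$(grep -c 'Resetting controller successful' "$log")
  (( count != 3 )) && exit 1    # assumed handling; the trace shows only the guard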
00:21:34.395 10:14:04 -- common/autotest_common.sh@828 -- # xtrace_disable
00:21:34.395 10:14:04 -- common/autotest_common.sh@10 -- # set +x
00:21:34.961 10:14:05 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:21:34.961 10:14:05 -- common/autotest_common.sh@852 -- # return 0
00:21:34.961 10:14:05 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:21:34.961 [2024-07-14 10:14:05.472875] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:21:34.961 10:14:05 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:21:35.218 [2024-07-14 10:14:05.689079] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:21:35.218 10:14:05 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:35.476 NVMe0n1
00:21:35.476 10:14:05 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:36.043
00:21:36.043 10:14:06 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:36.043
00:21:36.300 10:14:06 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:36.300 10:14:06 -- host/failover.sh@82 -- # grep -q NVMe0
00:21:36.300 10:14:06 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:36.557 10:14:07 -- host/failover.sh@87 -- # sleep 3
00:21:39.869 10:14:10 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:39.869 10:14:10 -- host/failover.sh@88 -- # grep -q NVMe0
00:21:39.869 10:14:10 -- host/failover.sh@90 -- # run_test_pid=95236
00:21:39.869 10:14:10 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:21:39.869 10:14:10 -- host/failover.sh@92 -- # wait 95236
00:21:41.243 0
00:21:41.243 10:14:11 -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:21:41.243 [2024-07-14 10:14:04.299417] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:21:41.243 [2024-07-14 10:14:04.299582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95098 ]
00:21:41.243 [2024-07-14 10:14:04.430307] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:21:41.243 [2024-07-14 10:14:04.502182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:21:41.243 [2024-07-14 10:14:07.028020] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:21:41.243 [2024-07-14 10:14:07.028148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:41.244 [2024-07-14 10:14:07.028171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:41.244 [2024-07-14 10:14:07.028188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:41.244 [2024-07-14 10:14:07.028201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:41.244 [2024-07-14 10:14:07.028258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:41.244 [2024-07-14 10:14:07.028275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:41.244 [2024-07-14 10:14:07.028289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:41.244 [2024-07-14 10:14:07.028303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:41.244 [2024-07-14 10:14:07.028316] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:41.244 [2024-07-14 10:14:07.028369] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:41.244 [2024-07-14 10:14:07.028400] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb2ff20 (9): Bad file descriptor
00:21:41.244 [2024-07-14 10:14:07.035678] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:41.244 Running I/O for 1 seconds...
00:21:41.244
00:21:41.244                                        Latency(us)
00:21:41.244 Device Information : runtime(s)     IOPS    MiB/s   Fail/s     TO/s   Average      min      max
00:21:41.244 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:41.244   Verification LBA range: start 0x0 length 0x4000
00:21:41.244   NVMe0n1            :       1.01 14888.07    58.16     0.00     0.00   8558.25  1385.19 10604.92
00:21:41.244 ===================================================================================================================
00:21:41.244 Total              :            14888.07    58.16     0.00     0.00   8558.25  1385.19 10604.92
00:21:41.244 10:14:11 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:14:11 -- host/failover.sh@95 -- # grep -q NVMe0
10:14:11 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:41.501 10:14:11 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
10:14:11 -- host/failover.sh@99 -- # grep -q NVMe0
00:21:41.759 10:14:12 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:42.016 10:14:12 -- host/failover.sh@101 -- # sleep 3
00:21:45.300 10:14:15 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:45.300 10:14:15 -- host/failover.sh@103 -- # grep -q NVMe0
00:21:45.300 10:14:15 -- host/failover.sh@108 -- # killprocess 95098
00:21:45.300 10:14:15 -- common/autotest_common.sh@926 -- # '[' -z 95098 ']'
00:21:45.300 10:14:15 -- common/autotest_common.sh@930 -- # kill -0 95098
00:21:45.300 10:14:15 -- common/autotest_common.sh@931 -- # uname
00:21:45.300 10:14:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:21:45.300 10:14:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95098
00:21:45.300 10:14:15 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:21:45.300 10:14:15 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:21:45.300 10:14:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95098'
killing process with pid 95098
10:14:15 -- common/autotest_common.sh@945 -- # kill 95098
10:14:15 -- common/autotest_common.sh@950 -- # wait 95098
10:14:15 -- host/failover.sh@110 -- # sync
10:14:15 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
10:14:16 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
10:14:16 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
10:14:16 -- host/failover.sh@116 -- # nvmftestfini
10:14:16 -- nvmf/common.sh@476 -- # nvmfcleanup
10:14:16 -- nvmf/common.sh@116 -- # sync
10:14:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
10:14:16 -- nvmf/common.sh@119 -- # set +e
10:14:16 -- nvmf/common.sh@120 -- # for i in {1..20}
10:14:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:14:16 -- nvmf/common.sh@122
-- # modprobe -v -r nvme-fabrics 00:21:45.559 10:14:16 -- nvmf/common.sh@123 -- # set -e 00:21:45.559 10:14:16 -- nvmf/common.sh@124 -- # return 0 00:21:45.559 10:14:16 -- nvmf/common.sh@477 -- # '[' -n 94731 ']' 00:21:45.559 10:14:16 -- nvmf/common.sh@478 -- # killprocess 94731 00:21:45.559 10:14:16 -- common/autotest_common.sh@926 -- # '[' -z 94731 ']' 00:21:45.559 10:14:16 -- common/autotest_common.sh@930 -- # kill -0 94731 00:21:45.559 10:14:16 -- common/autotest_common.sh@931 -- # uname 00:21:45.559 10:14:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:45.559 10:14:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 94731 00:21:45.817 10:14:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:45.817 killing process with pid 94731 00:21:45.817 10:14:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:45.817 10:14:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 94731' 00:21:45.817 10:14:16 -- common/autotest_common.sh@945 -- # kill 94731 00:21:45.817 10:14:16 -- common/autotest_common.sh@950 -- # wait 94731 00:21:45.817 10:14:16 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:45.817 10:14:16 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:45.817 10:14:16 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:45.817 10:14:16 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:45.817 10:14:16 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:45.817 10:14:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.817 10:14:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.817 10:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:46.075 10:14:16 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:46.075 00:21:46.075 real 0m32.181s 00:21:46.075 user 2m4.561s 00:21:46.075 sys 0m4.747s 00:21:46.075 10:14:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:46.075 10:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.075 ************************************ 00:21:46.075 END TEST nvmf_failover 00:21:46.075 ************************************ 00:21:46.075 10:14:16 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:46.075 10:14:16 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:46.075 10:14:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:46.075 10:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.075 ************************************ 00:21:46.075 START TEST nvmf_discovery 00:21:46.075 ************************************ 00:21:46.075 10:14:16 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:46.075 * Looking for test storage... 
00:21:46.075 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:21:46.075 10:14:16 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
10:14:16 -- nvmf/common.sh@7 -- # uname -s
10:14:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
10:14:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
10:14:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
10:14:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
10:14:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
10:14:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
10:14:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
10:14:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
10:14:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
10:14:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
10:14:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31
10:14:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31
10:14:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
10:14:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
10:14:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt
10:14:16 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
10:14:16 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
10:14:16 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
10:14:16 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
10:14:16 -- paths/export.sh@2-4 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated Go/protoc/golangci toolchain prepends condensed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:14:16 -- paths/export.sh@5 -- # export PATH
10:14:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:[same condensed toolchain PATH echoed]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
10:14:16 -- nvmf/common.sh@46 -- # : 0
10:14:16 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
10:14:16 -- nvmf/common.sh@48 -- # build_nvmf_app_args
10:14:16 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
10:14:16 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
10:14:16 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
10:14:16 -- nvmf/common.sh@32 -- # '[' -n '' ']'
10:14:16 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
10:14:16 -- nvmf/common.sh@50 -- # have_pci_nics=0
10:14:16 -- host/discovery.sh@11 -- # '[' tcp == rdma ']'
10:14:16 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009
10:14:16 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
10:14:16 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode
10:14:16 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test
10:14:16 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock
10:14:16 -- host/discovery.sh@25 -- # nvmftestinit
10:14:16 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
10:14:16 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
10:14:16 -- nvmf/common.sh@436 -- # prepare_net_devs
10:14:16 -- nvmf/common.sh@398 -- # local -g is_hw=no
10:14:16 -- nvmf/common.sh@400 -- # remove_spdk_ns
10:14:16 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
10:14:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
10:14:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
10:14:16 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
10:14:16 -- nvmf/common.sh@404 -- # [[ no == yes ]]
10:14:16 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
10:14:16 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
10:14:16 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
10:14:16 -- nvmf/common.sh@420 -- # nvmf_veth_init
10:14:16 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
10:14:16 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
10:14:16 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
10:14:16 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
10:14:16 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
10:14:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
10:14:16 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
10:14:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
10:14:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
10:14:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:46.076 10:14:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:46.076 10:14:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:46.076 10:14:16 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:46.076 10:14:16 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:46.076 Cannot find device "nvmf_tgt_br" 00:21:46.076 10:14:16 -- nvmf/common.sh@154 -- # true 00:21:46.076 10:14:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:46.076 Cannot find device "nvmf_tgt_br2" 00:21:46.076 10:14:16 -- nvmf/common.sh@155 -- # true 00:21:46.076 10:14:16 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:46.076 10:14:16 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:46.076 Cannot find device "nvmf_tgt_br" 00:21:46.076 10:14:16 -- nvmf/common.sh@157 -- # true 00:21:46.076 10:14:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:46.076 Cannot find device "nvmf_tgt_br2" 00:21:46.076 10:14:16 -- nvmf/common.sh@158 -- # true 00:21:46.076 10:14:16 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:21:46.335 10:14:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:21:46.335 10:14:16 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:46.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.335 10:14:16 -- nvmf/common.sh@161 -- # true 00:21:46.335 10:14:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:46.335 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:46.335 10:14:16 -- nvmf/common.sh@162 -- # true 00:21:46.335 10:14:16 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:21:46.335 10:14:16 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:46.335 10:14:16 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:46.335 10:14:16 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:46.335 10:14:16 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:46.335 10:14:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:46.335 10:14:16 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:46.335 10:14:16 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:46.335 10:14:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:46.335 10:14:16 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:21:46.335 10:14:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:21:46.335 10:14:16 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:21:46.335 10:14:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:21:46.335 10:14:16 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:46.335 10:14:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:46.335 10:14:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:46.335 10:14:16 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:21:46.335 10:14:16 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:21:46.335 10:14:16 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:21:46.335 10:14:16 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:46.335 10:14:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:46.335 10:14:16 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:46.335 10:14:16 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:46.335 10:14:16 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:21:46.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:46.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:21:46.335 00:21:46.335 --- 10.0.0.2 ping statistics --- 00:21:46.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.335 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:21:46.335 10:14:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:21:46.335 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:46.335 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.036 ms 00:21:46.335 00:21:46.335 --- 10.0.0.3 ping statistics --- 00:21:46.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.335 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:21:46.335 10:14:16 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:46.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:46.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:21:46.335 00:21:46.335 --- 10.0.0.1 ping statistics --- 00:21:46.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:46.335 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:21:46.335 10:14:16 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:46.335 10:14:16 -- nvmf/common.sh@421 -- # return 0 00:21:46.335 10:14:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:46.335 10:14:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:46.335 10:14:16 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:21:46.335 10:14:16 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:21:46.335 10:14:16 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:46.335 10:14:16 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:21:46.335 10:14:16 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:21:46.335 10:14:16 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:46.335 10:14:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:46.335 10:14:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:21:46.335 10:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.335 10:14:16 -- nvmf/common.sh@469 -- # nvmfpid=95526 00:21:46.335 10:14:16 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:46.335 10:14:16 -- nvmf/common.sh@470 -- # waitforlisten 95526 00:21:46.335 10:14:16 -- common/autotest_common.sh@819 -- # '[' -z 95526 ']' 00:21:46.335 10:14:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:46.335 10:14:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:46.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:46.335 10:14:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
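nvmf_veth_init has now built and verified the topology: nvmf_init_if (10.0.0.1/24) stays on the host, nvmf_tgt_if (10.0.0.2/24) and nvmf_tgt_if2 (10.0.0.3/24) are moved into the nvmf_tgt_ns_spdk namespace, the veth peers are enslaved to the nvmf_br bridge, iptables admits TCP/4420 on the initiator interface, and the three one-shot pings prove reachability in both directions. A condensed sketch of the same bring-up, with the second target interface omitted (root required; names as in the trace):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ping -c 1 10.0.0.2   # host -> namespace, as in the trace

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2, pid 95526 here), and waitforlisten blocks until the RPC server answers on /var/tmp/spdk.sock.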
00:21:46.335 10:14:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:46.335 10:14:16 -- common/autotest_common.sh@10 -- # set +x 00:21:46.594 [2024-07-14 10:14:16.881359] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:46.594 [2024-07-14 10:14:16.881445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:46.594 [2024-07-14 10:14:17.024265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.594 [2024-07-14 10:14:17.093220] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:46.594 [2024-07-14 10:14:17.093384] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:46.594 [2024-07-14 10:14:17.093399] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:46.594 [2024-07-14 10:14:17.093409] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:46.594 [2024-07-14 10:14:17.093434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:47.531 10:14:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:47.531 10:14:17 -- common/autotest_common.sh@852 -- # return 0 00:21:47.531 10:14:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:47.531 10:14:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 10:14:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:47.531 10:14:17 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:47.531 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 [2024-07-14 10:14:17.783859] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:47.531 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.531 10:14:17 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:21:47.531 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 [2024-07-14 10:14:17.791997] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:47.531 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.531 10:14:17 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:47.531 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 null0 00:21:47.531 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.531 10:14:17 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:47.531 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 null1 00:21:47.531 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.531 10:14:17 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:47.531 10:14:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:47.531 10:14:17 -- 
common/autotest_common.sh@10 -- # set +x 00:21:47.531 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:47.531 10:14:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:47.531 10:14:17 -- host/discovery.sh@45 -- # hostpid=95577 00:21:47.531 10:14:17 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:47.531 10:14:17 -- host/discovery.sh@46 -- # waitforlisten 95577 /tmp/host.sock 00:21:47.531 10:14:17 -- common/autotest_common.sh@819 -- # '[' -z 95577 ']' 00:21:47.531 10:14:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:21:47.531 10:14:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:21:47.531 10:14:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:47.531 10:14:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:21:47.531 10:14:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.531 [2024-07-14 10:14:17.877186] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:21:47.531 [2024-07-14 10:14:17.877478] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid95577 ] 00:21:47.531 [2024-07-14 10:14:18.017921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:47.790 [2024-07-14 10:14:18.090359] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:47.790 [2024-07-14 10:14:18.090892] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:48.356 10:14:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:21:48.356 10:14:18 -- common/autotest_common.sh@852 -- # return 0 00:21:48.356 10:14:18 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:48.356 10:14:18 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:48.356 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.356 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.356 10:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.356 10:14:18 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:48.356 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.356 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.356 10:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.356 10:14:18 -- host/discovery.sh@72 -- # notify_id=0 00:21:48.356 10:14:18 -- host/discovery.sh@78 -- # get_subsystem_names 00:21:48.356 10:14:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.356 10:14:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.356 10:14:18 -- host/discovery.sh@59 -- # sort 00:21:48.357 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.357 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.357 10:14:18 -- host/discovery.sh@59 -- # xargs 00:21:48.357 10:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.614 10:14:18 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:21:48.614 10:14:18 -- host/discovery.sh@79 -- # get_bdev_list 00:21:48.614 
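Two SPDK processes are in play from here on: the nvmf target (pid 95526, core mask 0x2, default RPC socket) and a host-side instance acting as the initiator (pid 95577, core mask 0x1, RPC socket /tmp/host.sock), whose bdev_nvme discovery service is what the test exercises. Condensing the RPC sequence traced above (rpc.py standing in for the harness's rpc_cmd wrapper; bdev_null_create takes a total size in MB and a block size in bytes):

# target side
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512
rpc.py bdev_null_create null1 1000 512
# host side: follow the discovery service and auto-attach reported subsystems
rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test

get_subsystem_names and get_bdev_list both come back empty here, as expected: nqn.2016-06.io.spdk:cnode0 is only created by the next rpc_cmd, and the discovery log page stays empty until that subsystem gains a TCP listener.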
10:14:18 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.614 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.614 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.614 10:14:18 -- host/discovery.sh@55 -- # sort 00:21:48.614 10:14:18 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.614 10:14:18 -- host/discovery.sh@55 -- # xargs 00:21:48.614 10:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.614 10:14:18 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:21:48.614 10:14:18 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:48.615 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.615 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 10:14:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.615 10:14:18 -- host/discovery.sh@82 -- # get_subsystem_names 00:21:48.615 10:14:18 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.615 10:14:18 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.615 10:14:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.615 10:14:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 10:14:18 -- host/discovery.sh@59 -- # sort 00:21:48.615 10:14:18 -- host/discovery.sh@59 -- # xargs 00:21:48.615 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.615 10:14:19 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:21:48.615 10:14:19 -- host/discovery.sh@83 -- # get_bdev_list 00:21:48.615 10:14:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.615 10:14:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.615 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.615 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 10:14:19 -- host/discovery.sh@55 -- # sort 00:21:48.615 10:14:19 -- host/discovery.sh@55 -- # xargs 00:21:48.615 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.615 10:14:19 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:48.615 10:14:19 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:48.615 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.615 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.615 10:14:19 -- host/discovery.sh@86 -- # get_subsystem_names 00:21:48.615 10:14:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.615 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.615 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.615 10:14:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.615 10:14:19 -- host/discovery.sh@59 -- # xargs 00:21:48.615 10:14:19 -- host/discovery.sh@59 -- # sort 00:21:48.615 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.615 10:14:19 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:21:48.615 10:14:19 -- host/discovery.sh@87 -- # get_bdev_list 00:21:48.615 10:14:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # xargs 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # 
sort 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:48.873 10:14:19 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 [2024-07-14 10:14:19.220362] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@92 -- # get_subsystem_names 00:21:48.873 10:14:19 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 10:14:19 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:48.873 10:14:19 -- host/discovery.sh@59 -- # xargs 00:21:48.873 10:14:19 -- host/discovery.sh@59 -- # sort 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:48.873 10:14:19 -- host/discovery.sh@93 -- # get_bdev_list 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # sort 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.873 10:14:19 -- host/discovery.sh@55 -- # xargs 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:21:48.873 10:14:19 -- host/discovery.sh@94 -- # get_notification_count 00:21:48.873 10:14:19 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:48.873 10:14:19 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@74 -- # notification_count=0 00:21:48.873 10:14:19 -- host/discovery.sh@75 -- # notify_id=0 00:21:48.873 10:14:19 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:48.873 10:14:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:48.873 10:14:19 -- common/autotest_common.sh@10 -- # set +x 00:21:48.873 10:14:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:48.873 10:14:19 -- host/discovery.sh@100 -- # sleep 1 00:21:49.438 [2024-07-14 10:14:19.866771] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:49.438 [2024-07-14 10:14:19.866859] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:49.438 [2024-07-14 10:14:19.866878] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:49.438 [2024-07-14 10:14:19.952895] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:49.695 [2024-07-14 10:14:20.009086] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:49.695 [2024-07-14 10:14:20.009142] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:49.953 10:14:20 -- host/discovery.sh@101 -- # get_subsystem_names 00:21:49.953 10:14:20 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:49.953 10:14:20 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:49.953 10:14:20 -- host/discovery.sh@59 -- # sort 00:21:49.953 10:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.953 10:14:20 -- host/discovery.sh@59 -- # xargs 00:21:49.953 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:21:49.953 10:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:49.953 10:14:20 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.953 10:14:20 -- host/discovery.sh@102 -- # get_bdev_list 00:21:49.953 10:14:20 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.953 10:14:20 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.953 10:14:20 -- host/discovery.sh@55 -- # sort 00:21:49.953 10:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:49.953 10:14:20 -- host/discovery.sh@55 -- # xargs 00:21:49.953 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:21:49.953 10:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:21:50.214 10:14:20 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:50.214 10:14:20 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:50.214 10:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.214 10:14:20 -- host/discovery.sh@63 -- # sort -n 00:21:50.214 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 10:14:20 -- 
host/discovery.sh@63 -- # xargs 00:21:50.214 10:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@104 -- # get_notification_count 00:21:50.214 10:14:20 -- host/discovery.sh@74 -- # jq '. | length' 00:21:50.214 10:14:20 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:50.214 10:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.214 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 10:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@74 -- # notification_count=1 00:21:50.214 10:14:20 -- host/discovery.sh@75 -- # notify_id=1 00:21:50.214 10:14:20 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:50.214 10:14:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:50.214 10:14:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.214 10:14:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:50.214 10:14:20 -- host/discovery.sh@109 -- # sleep 1 00:21:51.157 10:14:21 -- host/discovery.sh@110 -- # get_bdev_list 00:21:51.157 10:14:21 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:51.157 10:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.157 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:51.157 10:14:21 -- host/discovery.sh@55 -- # sort 00:21:51.157 10:14:21 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:51.157 10:14:21 -- host/discovery.sh@55 -- # xargs 00:21:51.157 10:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.157 10:14:21 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:51.157 10:14:21 -- host/discovery.sh@111 -- # get_notification_count 00:21:51.157 10:14:21 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:51.157 10:14:21 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:51.157 10:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.157 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:51.157 10:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.416 10:14:21 -- host/discovery.sh@74 -- # notification_count=1 00:21:51.416 10:14:21 -- host/discovery.sh@75 -- # notify_id=2 00:21:51.416 10:14:21 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:21:51.416 10:14:21 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:51.416 10:14:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:51.416 10:14:21 -- common/autotest_common.sh@10 -- # set +x 00:21:51.416 [2024-07-14 10:14:21.737458] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:51.416 [2024-07-14 10:14:21.737769] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:51.416 [2024-07-14 10:14:21.737798] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:51.416 10:14:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:51.416 10:14:21 -- host/discovery.sh@117 -- # sleep 1 00:21:51.416 [2024-07-14 10:14:21.823806] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:51.416 [2024-07-14 10:14:21.881027] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:51.416 [2024-07-14 10:14:21.881047] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:51.416 [2024-07-14 10:14:21.881053] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:52.351 10:14:22 -- host/discovery.sh@118 -- # get_subsystem_names 00:21:52.352 10:14:22 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:52.352 10:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.352 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 10:14:22 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:52.352 10:14:22 -- host/discovery.sh@59 -- # sort 00:21:52.352 10:14:22 -- host/discovery.sh@59 -- # xargs 00:21:52.352 10:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.352 10:14:22 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.352 10:14:22 -- host/discovery.sh@119 -- # get_bdev_list 00:21:52.352 10:14:22 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:52.352 10:14:22 -- host/discovery.sh@55 -- # sort 00:21:52.352 10:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.352 10:14:22 -- host/discovery.sh@55 -- # xargs 00:21:52.352 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 10:14:22 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:52.352 10:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.352 10:14:22 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:52.352 10:14:22 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:21:52.352 10:14:22 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:52.352 10:14:22 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:52.352 10:14:22 -- host/discovery.sh@63 
-- # xargs 00:21:52.352 10:14:22 -- host/discovery.sh@63 -- # sort -n 00:21:52.352 10:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.352 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.352 10:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.611 10:14:22 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:52.611 10:14:22 -- host/discovery.sh@121 -- # get_notification_count 00:21:52.611 10:14:22 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:52.611 10:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.611 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.611 10:14:22 -- host/discovery.sh@74 -- # jq '. | length' 00:21:52.611 10:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.611 10:14:22 -- host/discovery.sh@74 -- # notification_count=0 00:21:52.611 10:14:22 -- host/discovery.sh@75 -- # notify_id=2 00:21:52.611 10:14:22 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:21:52.611 10:14:22 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:52.611 10:14:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:52.611 10:14:22 -- common/autotest_common.sh@10 -- # set +x 00:21:52.611 [2024-07-14 10:14:22.974906] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:52.611 [2024-07-14 10:14:22.974933] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:52.611 [2024-07-14 10:14:22.976762] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.611 [2024-07-14 10:14:22.976793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.611 [2024-07-14 10:14:22.976821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.611 [2024-07-14 10:14:22.976830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.611 [2024-07-14 10:14:22.976838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.611 [2024-07-14 10:14:22.976846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.611 [2024-07-14 10:14:22.976854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:52.611 [2024-07-14 10:14:22.976862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:52.611 [2024-07-14 10:14:22.976870] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 10:14:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:52.611 10:14:22 -- host/discovery.sh@127 -- # sleep 1 00:21:52.611 [2024-07-14 10:14:22.986712] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:22.996728] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.611 [2024-07-14 10:14:22.996832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:22.996875] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:22.996889] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.611 [2024-07-14 10:14:22.996898] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 [2024-07-14 10:14:22.996912] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:22.996925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.611 [2024-07-14 10:14:22.996933] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.611 [2024-07-14 10:14:22.996942] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.611 [2024-07-14 10:14:22.996955] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.611 [2024-07-14 10:14:23.006792] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.611 [2024-07-14 10:14:23.006903] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.006944] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.006958] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.611 [2024-07-14 10:14:23.006967] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 [2024-07-14 10:14:23.006981] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:23.007002] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.611 [2024-07-14 10:14:23.007010] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.611 [2024-07-14 10:14:23.007018] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.611 [2024-07-14 10:14:23.007031] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
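Removing the 4420 listener (rpc_cmd nvmf_subsystem_remove_listener ... -s 4420 above) aborts the controller's outstanding ASYNC EVENT REQUESTs with SQ DELETION status and drops its admin qpair, so bdev_nvme marks nvme0 failed and enters the reset/reconnect loop seen here: every redial of 10.0.0.2:4420 fails with errno 111 (ECONNREFUSED) because nothing listens there any more, and the attempts repeat on a roughly 10 ms cadence. The per-path state can be inspected from the host socket while this is going on, mirroring the get_subsystem_paths helper used throughout:

rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 | jq -r '.[].ctrlrs[].trid.trsvcid'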
00:21:52.611 [2024-07-14 10:14:23.016876] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.611 [2024-07-14 10:14:23.017159] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.017448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.017473] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.611 [2024-07-14 10:14:23.017484] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 [2024-07-14 10:14:23.017547] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:23.017567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.611 [2024-07-14 10:14:23.017576] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.611 [2024-07-14 10:14:23.017584] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.611 [2024-07-14 10:14:23.017605] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.611 [2024-07-14 10:14:23.027120] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.611 [2024-07-14 10:14:23.027275] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.027320] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.027334] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.611 [2024-07-14 10:14:23.027343] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 [2024-07-14 10:14:23.027358] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:23.027371] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.611 [2024-07-14 10:14:23.027378] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.611 [2024-07-14 10:14:23.027386] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.611 [2024-07-14 10:14:23.027400] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:52.611 [2024-07-14 10:14:23.037196] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.611 [2024-07-14 10:14:23.037287] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.037327] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.611 [2024-07-14 10:14:23.037340] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.611 [2024-07-14 10:14:23.037349] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.611 [2024-07-14 10:14:23.037362] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.611 [2024-07-14 10:14:23.037374] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.611 [2024-07-14 10:14:23.037382] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.612 [2024-07-14 10:14:23.037389] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.612 [2024-07-14 10:14:23.037401] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.612 [2024-07-14 10:14:23.047250] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.612 [2024-07-14 10:14:23.047335] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.612 [2024-07-14 10:14:23.047375] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.612 [2024-07-14 10:14:23.047389] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.612 [2024-07-14 10:14:23.047398] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.612 [2024-07-14 10:14:23.047411] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.612 [2024-07-14 10:14:23.047423] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.612 [2024-07-14 10:14:23.047430] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.612 [2024-07-14 10:14:23.047437] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.612 [2024-07-14 10:14:23.047449] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
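The failures stay identical until the discovery poller refreshes the log page, at which point the dead 4420 path is reported "not found" and pruned while 4421 is kept (just below). A caller that needs to wait for that convergence could poll rather than sleep, in the trace's own jq/sort/xargs idiom (sketch):

until [[ "$(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)" == 4421 ]]; do
    sleep 0.1   # retry until only the surviving 4421 path is listed
done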
00:21:52.612 [2024-07-14 10:14:23.057294] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:52.612 [2024-07-14 10:14:23.057380] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.612 [2024-07-14 10:14:23.057420] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:52.612 [2024-07-14 10:14:23.057433] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1539150 with addr=10.0.0.2, port=4420 00:21:52.612 [2024-07-14 10:14:23.057442] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1539150 is same with the state(5) to be set 00:21:52.612 [2024-07-14 10:14:23.057455] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1539150 (9): Bad file descriptor 00:21:52.612 [2024-07-14 10:14:23.057467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:52.612 [2024-07-14 10:14:23.057474] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:52.612 [2024-07-14 10:14:23.057489] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:52.612 [2024-07-14 10:14:23.057501] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:52.612 [2024-07-14 10:14:23.061194] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:52.612 [2024-07-14 10:14:23.061244] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:53.552 10:14:23 -- host/discovery.sh@128 -- # get_subsystem_names 00:21:53.552 10:14:23 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:53.552 10:14:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.552 10:14:23 -- host/discovery.sh@59 -- # sort 00:21:53.552 10:14:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.552 10:14:23 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:53.552 10:14:23 -- host/discovery.sh@59 -- # xargs 00:21:53.552 10:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.552 10:14:24 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.552 10:14:24 -- host/discovery.sh@129 -- # get_bdev_list 00:21:53.552 10:14:24 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:53.552 10:14:24 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:53.552 10:14:24 -- host/discovery.sh@55 -- # sort 00:21:53.552 10:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.552 10:14:24 -- host/discovery.sh@55 -- # xargs 00:21:53.552 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.552 10:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:21:53.810 10:14:24 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:53.810 10:14:24 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:53.810 10:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.810 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.810 10:14:24 -- 
host/discovery.sh@63 -- # sort -n 00:21:53.810 10:14:24 -- host/discovery.sh@63 -- # xargs 00:21:53.810 10:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@131 -- # get_notification_count 00:21:53.810 10:14:24 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:53.810 10:14:24 -- host/discovery.sh@74 -- # jq '. | length' 00:21:53.810 10:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.810 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.810 10:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@74 -- # notification_count=0 00:21:53.810 10:14:24 -- host/discovery.sh@75 -- # notify_id=2 00:21:53.810 10:14:24 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:53.810 10:14:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:53.810 10:14:24 -- common/autotest_common.sh@10 -- # set +x 00:21:53.810 10:14:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:53.810 10:14:24 -- host/discovery.sh@135 -- # sleep 1 00:21:54.744 10:14:25 -- host/discovery.sh@136 -- # get_subsystem_names 00:21:54.744 10:14:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:54.744 10:14:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:54.744 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:54.744 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:54.744 10:14:25 -- host/discovery.sh@59 -- # sort 00:21:54.744 10:14:25 -- host/discovery.sh@59 -- # xargs 00:21:54.744 10:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.002 10:14:25 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:21:55.002 10:14:25 -- host/discovery.sh@137 -- # get_bdev_list 00:21:55.002 10:14:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:55.002 10:14:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:55.002 10:14:25 -- host/discovery.sh@55 -- # sort 00:21:55.002 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.002 10:14:25 -- host/discovery.sh@55 -- # xargs 00:21:55.002 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 10:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.002 10:14:25 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:21:55.002 10:14:25 -- host/discovery.sh@138 -- # get_notification_count 00:21:55.002 10:14:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:55.002 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.002 10:14:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:55.002 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.002 10:14:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:55.002 10:14:25 -- host/discovery.sh@74 -- # notification_count=2 00:21:55.002 10:14:25 -- host/discovery.sh@75 -- # notify_id=4 00:21:55.002 10:14:25 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:21:55.002 10:14:25 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:55.002 10:14:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:55.002 10:14:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.937 [2024-07-14 10:14:26.417727] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:55.937 [2024-07-14 10:14:26.417758] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:55.937 [2024-07-14 10:14:26.417791] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:56.195 [2024-07-14 10:14:26.503833] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:56.195 [2024-07-14 10:14:26.562874] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:56.195 [2024-07-14 10:14:26.562914] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:56.195 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.195 10:14:26 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.195 10:14:26 -- common/autotest_common.sh@640 -- # local es=0 00:21:56.195 10:14:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.195 10:14:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.195 10:14:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.195 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.195 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 2024/07/14 10:14:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:56.195 request: 00:21:56.195 { 00:21:56.195 "method": "bdev_nvme_start_discovery", 00:21:56.195 "params": { 00:21:56.195 "name": "nvme", 00:21:56.195 "trtype": "tcp", 00:21:56.195 "traddr": "10.0.0.2", 00:21:56.195 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:56.195 "adrfam": "ipv4", 00:21:56.195 "trsvcid": "8009", 00:21:56.195 "wait_for_attach": true 00:21:56.195 } 00:21:56.195 } 00:21:56.195 Got JSON-RPC error response 00:21:56.195 GoRPCClient: 
error on JSON-RPC call 00:21:56.195 10:14:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:56.195 10:14:26 -- common/autotest_common.sh@643 -- # es=1 00:21:56.195 10:14:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:56.195 10:14:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:56.195 10:14:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:56.195 10:14:26 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:21:56.195 10:14:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:56.195 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.195 10:14:26 -- host/discovery.sh@67 -- # sort 00:21:56.195 10:14:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:56.195 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 10:14:26 -- host/discovery.sh@67 -- # xargs 00:21:56.195 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.195 10:14:26 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:21:56.195 10:14:26 -- host/discovery.sh@147 -- # get_bdev_list 00:21:56.195 10:14:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.195 10:14:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.195 10:14:26 -- host/discovery.sh@55 -- # sort 00:21:56.195 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.195 10:14:26 -- host/discovery.sh@55 -- # xargs 00:21:56.195 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.195 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.195 10:14:26 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:56.195 10:14:26 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.195 10:14:26 -- common/autotest_common.sh@640 -- # local es=0 00:21:56.195 10:14:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.195 10:14:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:56.195 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.196 10:14:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:56.196 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.196 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.196 2024/07/14 10:14:26 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:21:56.196 request: 00:21:56.196 { 00:21:56.196 "method": "bdev_nvme_start_discovery", 00:21:56.196 "params": { 00:21:56.196 "name": "nvme_second", 00:21:56.196 "trtype": "tcp", 00:21:56.196 "traddr": "10.0.0.2", 00:21:56.196 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:56.196 "adrfam": "ipv4", 00:21:56.196 "trsvcid": "8009", 00:21:56.196 "wait_for_attach": true 00:21:56.196 } 00:21:56.196 } 
00:21:56.196 Got JSON-RPC error response 00:21:56.196 GoRPCClient: error on JSON-RPC call 00:21:56.196 10:14:26 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:56.196 10:14:26 -- common/autotest_common.sh@643 -- # es=1 00:21:56.196 10:14:26 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:56.196 10:14:26 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:56.196 10:14:26 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:56.196 10:14:26 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:21:56.196 10:14:26 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:56.196 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.196 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.196 10:14:26 -- host/discovery.sh@67 -- # sort 00:21:56.196 10:14:26 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:56.196 10:14:26 -- host/discovery.sh@67 -- # xargs 00:21:56.196 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.462 10:14:26 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:21:56.462 10:14:26 -- host/discovery.sh@153 -- # get_bdev_list 00:21:56.462 10:14:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:56.462 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.462 10:14:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:56.462 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:56.462 10:14:26 -- host/discovery.sh@55 -- # sort 00:21:56.462 10:14:26 -- host/discovery.sh@55 -- # xargs 00:21:56.462 10:14:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:56.462 10:14:26 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:56.462 10:14:26 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:56.462 10:14:26 -- common/autotest_common.sh@640 -- # local es=0 00:21:56.462 10:14:26 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:56.462 10:14:26 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:21:56.462 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.462 10:14:26 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:21:56.462 10:14:26 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:21:56.462 10:14:26 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:56.462 10:14:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:56.462 10:14:26 -- common/autotest_common.sh@10 -- # set +x 00:21:57.396 [2024-07-14 10:14:27.833057] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.396 [2024-07-14 10:14:27.833158] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:57.396 [2024-07-14 10:14:27.833175] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14df000 with addr=10.0.0.2, port=8010 00:21:57.396 [2024-07-14 10:14:27.833192] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:57.396 [2024-07-14 10:14:27.833201] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:57.396 [2024-07-14 
10:14:27.833209] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:58.330 [2024-07-14 10:14:28.833028] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:58.330 [2024-07-14 10:14:28.833121] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:58.330 [2024-07-14 10:14:28.833137] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14df000 with addr=10.0.0.2, port=8010 00:21:58.330 [2024-07-14 10:14:28.833149] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:58.330 [2024-07-14 10:14:28.833158] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:58.330 [2024-07-14 10:14:28.833165] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:59.704 [2024-07-14 10:14:29.832968] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:59.704 2024/07/14 10:14:29 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:21:59.704 request: 00:21:59.704 { 00:21:59.704 "method": "bdev_nvme_start_discovery", 00:21:59.704 "params": { 00:21:59.704 "name": "nvme_second", 00:21:59.704 "trtype": "tcp", 00:21:59.704 "traddr": "10.0.0.2", 00:21:59.704 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:59.704 "adrfam": "ipv4", 00:21:59.704 "trsvcid": "8010", 00:21:59.704 "attach_timeout_ms": 3000 00:21:59.704 } 00:21:59.704 } 00:21:59.704 Got JSON-RPC error response 00:21:59.704 GoRPCClient: error on JSON-RPC call 00:21:59.704 10:14:29 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:21:59.704 10:14:29 -- common/autotest_common.sh@643 -- # es=1 00:21:59.704 10:14:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:21:59.704 10:14:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:21:59.704 10:14:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:21:59.704 10:14:29 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:21:59.704 10:14:29 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:59.704 10:14:29 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:59.704 10:14:29 -- host/discovery.sh@67 -- # sort 00:21:59.704 10:14:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:21:59.704 10:14:29 -- host/discovery.sh@67 -- # xargs 00:21:59.704 10:14:29 -- common/autotest_common.sh@10 -- # set +x 00:21:59.704 10:14:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:21:59.704 10:14:29 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:21:59.704 10:14:29 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:21:59.704 10:14:29 -- host/discovery.sh@162 -- # kill 95577 00:21:59.704 10:14:29 -- host/discovery.sh@163 -- # nvmftestfini 00:21:59.704 10:14:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:59.704 10:14:29 -- nvmf/common.sh@116 -- # sync 00:21:59.704 10:14:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:21:59.704 10:14:29 -- nvmf/common.sh@119 -- # set +e 00:21:59.704 10:14:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:59.704 10:14:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:21:59.704 rmmod nvme_tcp 00:21:59.704 rmmod nvme_fabrics 00:21:59.704 rmmod nvme_keyring 
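The two negative cases round out the matrix: restarting discovery under a name that is already registered fails immediately with -17 (File exists), while pointing a fresh name at 10.0.0.2:8010, where nothing listens, has connect() refused (errno 111) roughly once per second until the -T 3000 attach timeout elapses and the RPC returns -110 (Connection timed out). The timed-out form as issued above:

rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000

After that the test clears its traps, stops discovery, kills the host process (pid 95577), and nvmftestfini unloads the nvme-tcp module stack (the rmmod lines above); the parent target, pid 95526, is killed in the lines that follow.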
00:21:59.704 10:14:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:59.704 10:14:29 -- nvmf/common.sh@123 -- # set -e 00:21:59.704 10:14:29 -- nvmf/common.sh@124 -- # return 0 00:21:59.704 10:14:29 -- nvmf/common.sh@477 -- # '[' -n 95526 ']' 00:21:59.704 10:14:29 -- nvmf/common.sh@478 -- # killprocess 95526 00:21:59.704 10:14:29 -- common/autotest_common.sh@926 -- # '[' -z 95526 ']' 00:21:59.704 10:14:29 -- common/autotest_common.sh@930 -- # kill -0 95526 00:21:59.704 10:14:29 -- common/autotest_common.sh@931 -- # uname 00:21:59.704 10:14:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:21:59.704 10:14:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 95526 00:21:59.704 10:14:30 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:21:59.704 10:14:30 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:21:59.704 killing process with pid 95526 00:21:59.704 10:14:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 95526' 00:21:59.704 10:14:30 -- common/autotest_common.sh@945 -- # kill 95526 00:21:59.704 10:14:30 -- common/autotest_common.sh@950 -- # wait 95526 00:21:59.704 10:14:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:59.704 10:14:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:21:59.704 10:14:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:21:59.704 10:14:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.704 10:14:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:21:59.704 10:14:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.704 10:14:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.704 10:14:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.964 10:14:30 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:21:59.964 ************************************ 00:21:59.964 END TEST nvmf_discovery 00:21:59.964 ************************************ 00:21:59.964 00:21:59.964 real 0m13.840s 00:21:59.964 user 0m27.232s 00:21:59.964 sys 0m1.675s 00:21:59.964 10:14:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:59.964 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:21:59.964 10:14:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:59.964 10:14:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:21:59.964 10:14:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:21:59.964 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:21:59.964 ************************************ 00:21:59.964 START TEST nvmf_discovery_remove_ifc 00:21:59.964 ************************************ 00:21:59.964 10:14:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:21:59.964 * Looking for test storage... 
00:21:59.964 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:59.964 10:14:30 -- nvmf/common.sh@7 -- # uname -s 00:21:59.964 10:14:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.964 10:14:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.964 10:14:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.964 10:14:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.964 10:14:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:59.964 10:14:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:59.964 10:14:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.964 10:14:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:59.964 10:14:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.964 10:14:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:59.964 10:14:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:21:59.964 10:14:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:21:59.964 10:14:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.964 10:14:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:59.964 10:14:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:59.964 10:14:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:59.964 10:14:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.964 10:14:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.964 10:14:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.964 10:14:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.964 10:14:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.964 10:14:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.964 10:14:30 -- 
paths/export.sh@5 -- # export PATH 00:21:59.964 10:14:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.964 10:14:30 -- nvmf/common.sh@46 -- # : 0 00:21:59.964 10:14:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:59.964 10:14:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:59.964 10:14:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:59.964 10:14:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.964 10:14:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.964 10:14:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:59.964 10:14:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:59.964 10:14:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:21:59.964 10:14:30 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:21:59.964 10:14:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:21:59.964 10:14:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.964 10:14:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:59.965 10:14:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:59.965 10:14:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:59.965 10:14:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.965 10:14:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:59.965 10:14:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:59.965 10:14:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:21:59.965 10:14:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:21:59.965 10:14:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:21:59.965 10:14:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:21:59.965 10:14:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:21:59.965 10:14:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:21:59.965 10:14:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:59.965 10:14:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:59.965 10:14:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:59.965 10:14:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:21:59.965 10:14:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:59.965 10:14:30 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:59.965 10:14:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:59.965 10:14:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:21:59.965 10:14:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:59.965 10:14:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:59.965 10:14:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:59.965 10:14:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:59.965 10:14:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:21:59.965 10:14:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:21:59.965 Cannot find device "nvmf_tgt_br" 00:21:59.965 10:14:30 -- nvmf/common.sh@154 -- # true 00:21:59.965 10:14:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:21:59.965 Cannot find device "nvmf_tgt_br2" 00:21:59.965 10:14:30 -- nvmf/common.sh@155 -- # true 00:21:59.965 10:14:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:21:59.965 10:14:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:21:59.965 Cannot find device "nvmf_tgt_br" 00:21:59.965 10:14:30 -- nvmf/common.sh@157 -- # true 00:21:59.965 10:14:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:21:59.965 Cannot find device "nvmf_tgt_br2" 00:21:59.965 10:14:30 -- nvmf/common.sh@158 -- # true 00:21:59.965 10:14:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:00.224 10:14:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:00.224 10:14:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:00.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.224 10:14:30 -- nvmf/common.sh@161 -- # true 00:22:00.224 10:14:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:00.224 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:00.224 10:14:30 -- nvmf/common.sh@162 -- # true 00:22:00.224 10:14:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:00.224 10:14:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:00.224 10:14:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:00.224 10:14:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:00.224 10:14:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:00.224 10:14:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:00.224 10:14:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:00.224 10:14:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:00.224 10:14:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:00.225 10:14:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:00.225 10:14:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:00.225 10:14:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:00.225 10:14:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:00.225 10:14:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:00.225 10:14:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:00.225 10:14:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:00.225 10:14:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:00.225 10:14:30 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:22:00.225 10:14:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:00.225 10:14:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:00.225 10:14:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:00.225 10:14:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:00.225 10:14:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:00.225 10:14:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:00.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:00.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:22:00.225 00:22:00.225 --- 10.0.0.2 ping statistics --- 00:22:00.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.225 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:00.225 10:14:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:00.225 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:00.225 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:22:00.225 00:22:00.225 --- 10.0.0.3 ping statistics --- 00:22:00.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.225 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:22:00.225 10:14:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:00.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:00.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:22:00.225 00:22:00.225 --- 10.0.0.1 ping statistics --- 00:22:00.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:00.225 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:22:00.225 10:14:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:00.225 10:14:30 -- nvmf/common.sh@421 -- # return 0 00:22:00.225 10:14:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:00.225 10:14:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:00.225 10:14:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:00.225 10:14:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:00.225 10:14:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:00.225 10:14:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:00.225 10:14:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:00.225 10:14:30 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:00.225 10:14:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:00.225 10:14:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:00.225 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.225 10:14:30 -- nvmf/common.sh@469 -- # nvmfpid=96092 00:22:00.225 10:14:30 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:00.225 10:14:30 -- nvmf/common.sh@470 -- # waitforlisten 96092 00:22:00.225 10:14:30 -- common/autotest_common.sh@819 -- # '[' -z 96092 ']' 00:22:00.225 10:14:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.225 10:14:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:00.225 10:14:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
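The nvmf_veth_init trace above builds the test topology: one veth pair per endpoint, with the initiator side (nvmf_init_if, 10.0.0.1) left in the default namespace, the target side (nvmf_tgt_if, 10.0.0.2) moved into nvmf_tgt_ns_spdk, and everything joined by the nvmf_br bridge. Condensed to a single interface pair, the sequence is as follows (the full helper also creates the nvmf_tgt_if2/10.0.0.3 pair and the iptables ACCEPT rules seen above):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ping -c 1 10.0.0.2    # default ns -> target ns, across the bridge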
00:22:00.225 10:14:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:00.225 10:14:30 -- common/autotest_common.sh@10 -- # set +x 00:22:00.483 [2024-07-14 10:14:30.815044] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:00.483 [2024-07-14 10:14:30.815148] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.483 [2024-07-14 10:14:30.958265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.741 [2024-07-14 10:14:31.049901] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:00.741 [2024-07-14 10:14:31.050082] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.741 [2024-07-14 10:14:31.050097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:00.741 [2024-07-14 10:14:31.050105] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.741 [2024-07-14 10:14:31.050131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.306 10:14:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:01.306 10:14:31 -- common/autotest_common.sh@852 -- # return 0 00:22:01.306 10:14:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:01.306 10:14:31 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:01.306 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 10:14:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.306 10:14:31 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:01.306 10:14:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:01.306 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:01.306 [2024-07-14 10:14:31.837482] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.306 [2024-07-14 10:14:31.845643] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:01.563 null0 00:22:01.563 [2024-07-14 10:14:31.877566] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.563 10:14:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:01.563 10:14:31 -- host/discovery_remove_ifc.sh@59 -- # hostpid=96142 00:22:01.563 10:14:31 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:01.563 10:14:31 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 96142 /tmp/host.sock 00:22:01.563 10:14:31 -- common/autotest_common.sh@819 -- # '[' -z 96142 ']' 00:22:01.563 10:14:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:01.563 10:14:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:01.563 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:01.563 10:14:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:01.563 10:14:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:01.563 10:14:31 -- common/autotest_common.sh@10 -- # set +x 00:22:01.563 [2024-07-14 10:14:31.956969] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:01.563 [2024-07-14 10:14:31.957098] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96142 ] 00:22:01.563 [2024-07-14 10:14:32.098192] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.821 [2024-07-14 10:14:32.187176] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:01.821 [2024-07-14 10:14:32.187396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.387 10:14:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:02.387 10:14:32 -- common/autotest_common.sh@852 -- # return 0 00:22:02.387 10:14:32 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:02.387 10:14:32 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:02.387 10:14:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.387 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.387 10:14:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.387 10:14:32 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:02.387 10:14:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.387 10:14:32 -- common/autotest_common.sh@10 -- # set +x 00:22:02.646 10:14:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:02.646 10:14:33 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:02.646 10:14:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:02.646 10:14:33 -- common/autotest_common.sh@10 -- # set +x 00:22:03.583 [2024-07-14 10:14:34.020588] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:03.583 [2024-07-14 10:14:34.020638] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:03.583 [2024-07-14 10:14:34.020656] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:03.583 [2024-07-14 10:14:34.106717] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:03.842 [2024-07-14 10:14:34.162508] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:03.842 [2024-07-14 10:14:34.162576] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:03.842 [2024-07-14 10:14:34.162620] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:03.842 [2024-07-14 10:14:34.162636] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:03.842 [2024-07-14 10:14:34.162661] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:03.842 10:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.842 10:14:34 -- 
host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.842 10:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.842 [2024-07-14 10:14:34.169135] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x54f530 was disconnected and freed. delete nvme_qpair. 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.842 10:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.842 10:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:03.842 10:14:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:03.842 10:14:34 -- common/autotest_common.sh@10 -- # set +x 00:22:03.842 10:14:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:03.842 10:14:34 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:04.777 10:14:35 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:04.777 10:14:35 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:04.777 10:14:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:04.777 10:14:35 -- common/autotest_common.sh@10 -- # set +x 00:22:04.777 10:14:35 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:04.777 10:14:35 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:04.777 10:14:35 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:04.777 10:14:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.036 10:14:35 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.036 10:14:35 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:05.973 10:14:36 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:05.973 10:14:36 -- common/autotest_common.sh@10 -- # set +x 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:05.973 10:14:36 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:05.973 10:14:36 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:06.910 10:14:37 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:06.910 10:14:37 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:22:06.910 10:14:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:06.910 10:14:37 -- common/autotest_common.sh@10 -- # set +x 00:22:06.910 10:14:37 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:06.910 10:14:37 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:06.910 10:14:37 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:06.910 10:14:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:07.169 10:14:37 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:07.169 10:14:37 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:08.106 10:14:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:08.106 10:14:38 -- common/autotest_common.sh@10 -- # set +x 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:08.106 10:14:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:08.106 10:14:38 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.046 10:14:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:09.046 10:14:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:09.046 10:14:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:09.046 10:14:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:09.046 10:14:39 -- common/autotest_common.sh@10 -- # set +x 00:22:09.046 10:14:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:09.046 10:14:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:09.046 10:14:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:09.323 [2024-07-14 10:14:39.590510] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:09.323 [2024-07-14 10:14:39.590604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.323 [2024-07-14 10:14:39.590619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.323 [2024-07-14 10:14:39.590631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.323 [2024-07-14 10:14:39.590640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.323 [2024-07-14 10:14:39.590649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.323 [2024-07-14 10:14:39.590658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.323 [2024-07-14 10:14:39.590667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.323 [2024-07-14 10:14:39.590676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.323 [2024-07-14 
10:14:39.590685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:09.323 [2024-07-14 10:14:39.590693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:09.323 [2024-07-14 10:14:39.590702] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x515c50 is same with the state(5) to be set 00:22:09.323 10:14:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:09.323 [2024-07-14 10:14:39.600507] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x515c50 (9): Bad file descriptor 00:22:09.323 10:14:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:09.323 [2024-07-14 10:14:39.610525] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:10.299 10:14:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:10.299 10:14:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:10.299 10:14:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:10.299 10:14:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:10.299 10:14:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.299 10:14:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:10.299 10:14:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:10.299 [2024-07-14 10:14:40.673351] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:11.231 [2024-07-14 10:14:41.697340] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:11.231 [2024-07-14 10:14:41.697466] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x515c50 with addr=10.0.0.2, port=4420 00:22:11.231 [2024-07-14 10:14:41.697503] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x515c50 is same with the state(5) to be set 00:22:11.231 [2024-07-14 10:14:41.697560] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:11.231 [2024-07-14 10:14:41.697584] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:11.231 [2024-07-14 10:14:41.697603] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:11.231 [2024-07-14 10:14:41.697625] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:11.231 [2024-07-14 10:14:41.698516] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x515c50 (9): Bad file descriptor 00:22:11.231 [2024-07-14 10:14:41.698581] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
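The repeated bdev_get_bdevs / sleep 1 records above are the harness polling for the bdev list to change: the discovery controller was started with --reconnect-delay-sec 1 and --ctrlr-loss-timeout-sec 2, so after the target interface was deleted each reconnect attempt fails, and once the loss timeout elapses the controller (and nvme0n1 with it) is torn down. The polling helper's logic is roughly the sketch below (a paraphrase of the get_bdev_list/wait_for_bdev pattern visible in the trace, not the script verbatim):

    # Poll the host app once per second until the sorted bdev-name list
    # matches the expected value.
    wait_for_bdev() {
        local want=$1
        while [[ "$(scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs \
                        | jq -r '.[].name' | sort | xargs)" != "$want" ]]; do
            sleep 1
        done
    }
    wait_for_bdev ''           # list drains once the controller is lost
    wait_for_bdev nvme1n1      # and refills after the interface returns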
00:22:11.231 [2024-07-14 10:14:41.698644] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:11.231 [2024-07-14 10:14:41.698712] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.231 [2024-07-14 10:14:41.698743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.231 [2024-07-14 10:14:41.698771] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.231 [2024-07-14 10:14:41.698793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.231 [2024-07-14 10:14:41.698857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.231 [2024-07-14 10:14:41.698879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.231 [2024-07-14 10:14:41.698902] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.231 [2024-07-14 10:14:41.698923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.231 [2024-07-14 10:14:41.698946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:11.231 [2024-07-14 10:14:41.698967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:11.231 [2024-07-14 10:14:41.698989] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:22:11.231 [2024-07-14 10:14:41.699022] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x516060 (9): Bad file descriptor 00:22:11.231 [2024-07-14 10:14:41.699610] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:11.231 [2024-07-14 10:14:41.699644] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:11.231 10:14:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:11.231 10:14:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:11.231 10:14:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.603 10:14:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.603 10:14:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.603 10:14:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:12.603 10:14:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:12.603 10:14:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:12.603 10:14:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:12.603 10:14:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:13.168 [2024-07-14 10:14:43.712062] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:13.168 [2024-07-14 10:14:43.712088] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:13.168 [2024-07-14 10:14:43.712103] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:13.427 [2024-07-14 10:14:43.798157] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:22:13.427 [2024-07-14 10:14:43.853077] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:13.427 [2024-07-14 10:14:43.853117] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:13.427 [2024-07-14 10:14:43.853139] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:13.427 [2024-07-14 10:14:43.853153] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:22:13.427 [2024-07-14 10:14:43.853161] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:13.427 [2024-07-14 10:14:43.860557] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x522b00 was disconnected and freed. delete nvme_qpair. 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:13.427 10:14:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:13.427 10:14:43 -- common/autotest_common.sh@10 -- # set +x 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:13.427 10:14:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:13.427 10:14:43 -- host/discovery_remove_ifc.sh@90 -- # killprocess 96142 00:22:13.427 10:14:43 -- common/autotest_common.sh@926 -- # '[' -z 96142 ']' 00:22:13.427 10:14:43 -- common/autotest_common.sh@930 -- # kill -0 96142 00:22:13.427 10:14:43 -- common/autotest_common.sh@931 -- # uname 00:22:13.427 10:14:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.427 10:14:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96142 00:22:13.427 10:14:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:13.427 10:14:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:13.427 killing process with pid 96142 00:22:13.427 10:14:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96142' 00:22:13.427 10:14:43 -- common/autotest_common.sh@945 -- # kill 96142 00:22:13.427 10:14:43 -- common/autotest_common.sh@950 -- # wait 96142 00:22:13.686 10:14:44 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:13.686 10:14:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:13.686 10:14:44 -- nvmf/common.sh@116 -- # sync 00:22:13.686 10:14:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:22:13.686 10:14:44 -- nvmf/common.sh@119 -- # set +e 00:22:13.686 10:14:44 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:13.686 10:14:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:22:13.686 rmmod nvme_tcp 00:22:13.686 rmmod nvme_fabrics 00:22:13.686 rmmod nvme_keyring 00:22:13.686 10:14:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:13.686 10:14:44 -- nvmf/common.sh@123 -- # set -e 00:22:13.686 10:14:44 -- nvmf/common.sh@124 -- # return 0 00:22:13.686 10:14:44 -- nvmf/common.sh@477 -- # '[' -n 96092 ']' 00:22:13.686 10:14:44 -- nvmf/common.sh@478 -- # killprocess 96092 00:22:13.686 10:14:44 -- common/autotest_common.sh@926 -- # '[' -z 96092 ']' 00:22:13.686 10:14:44 -- common/autotest_common.sh@930 -- # kill -0 96092 00:22:13.686 10:14:44 -- common/autotest_common.sh@931 -- # uname 00:22:13.686 10:14:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:13.686 10:14:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96092 00:22:13.686 killing process with pid 96092 00:22:13.686 10:14:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:13.686 10:14:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:22:13.686 10:14:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96092' 00:22:13.686 10:14:44 -- common/autotest_common.sh@945 -- # kill 96092 00:22:13.686 10:14:44 -- common/autotest_common.sh@950 -- # wait 96092 00:22:13.945 10:14:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:13.945 10:14:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:22:13.945 10:14:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:22:13.945 10:14:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:13.945 10:14:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:22:13.945 10:14:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:13.945 10:14:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:13.945 10:14:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:13.945 10:14:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:22:13.945 ************************************ 00:22:13.945 END TEST nvmf_discovery_remove_ifc 00:22:13.945 ************************************ 00:22:13.945 00:22:13.945 real 0m14.148s 00:22:13.945 user 0m24.290s 00:22:13.945 sys 0m1.582s 00:22:13.945 10:14:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:13.945 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.204 10:14:44 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:22:14.204 10:14:44 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:14.204 10:14:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:22:14.204 10:14:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:14.204 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.204 ************************************ 00:22:14.204 START TEST nvmf_digest 00:22:14.204 ************************************ 00:22:14.204 10:14:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:22:14.204 * Looking for test storage... 
00:22:14.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:14.204 10:14:44 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:14.204 10:14:44 -- nvmf/common.sh@7 -- # uname -s 00:22:14.204 10:14:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:14.204 10:14:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:14.204 10:14:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:14.204 10:14:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:14.204 10:14:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:14.204 10:14:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:14.204 10:14:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:14.204 10:14:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:14.204 10:14:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:14.204 10:14:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:14.204 10:14:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:22:14.204 10:14:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:22:14.204 10:14:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:14.204 10:14:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:14.204 10:14:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:14.204 10:14:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:14.204 10:14:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:14.204 10:14:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:14.204 10:14:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:14.204 10:14:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.204 10:14:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.204 10:14:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.204 10:14:44 -- paths/export.sh@5 
-- # export PATH 00:22:14.204 10:14:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:14.204 10:14:44 -- nvmf/common.sh@46 -- # : 0 00:22:14.204 10:14:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:14.204 10:14:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:14.204 10:14:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:14.204 10:14:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:14.204 10:14:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:14.204 10:14:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:14.204 10:14:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:14.204 10:14:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:14.204 10:14:44 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:22:14.204 10:14:44 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:22:14.204 10:14:44 -- host/digest.sh@16 -- # runtime=2 00:22:14.204 10:14:44 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:22:14.204 10:14:44 -- host/digest.sh@132 -- # nvmftestinit 00:22:14.204 10:14:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:22:14.204 10:14:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:14.205 10:14:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:14.205 10:14:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:14.205 10:14:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:14.205 10:14:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.205 10:14:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:14.205 10:14:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:14.205 10:14:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:22:14.205 10:14:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:22:14.205 10:14:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:22:14.205 10:14:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:22:14.205 10:14:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:22:14.205 10:14:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:22:14.205 10:14:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:14.205 10:14:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:14.205 10:14:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:14.205 10:14:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:22:14.205 10:14:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:14.205 10:14:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:14.205 10:14:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:14.205 10:14:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:14.205 10:14:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:14.205 10:14:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:14.205 10:14:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:14.205 10:14:44 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:14.205 10:14:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:14.205 10:14:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:14.205 Cannot find device "nvmf_tgt_br" 00:22:14.205 10:14:44 -- nvmf/common.sh@154 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:14.205 Cannot find device "nvmf_tgt_br2" 00:22:14.205 10:14:44 -- nvmf/common.sh@155 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:14.205 10:14:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:14.205 Cannot find device "nvmf_tgt_br" 00:22:14.205 10:14:44 -- nvmf/common.sh@157 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:14.205 Cannot find device "nvmf_tgt_br2" 00:22:14.205 10:14:44 -- nvmf/common.sh@158 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:14.205 10:14:44 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:14.205 10:14:44 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:14.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.205 10:14:44 -- nvmf/common.sh@161 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:14.205 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:14.205 10:14:44 -- nvmf/common.sh@162 -- # true 00:22:14.205 10:14:44 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:14.205 10:14:44 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:14.205 10:14:44 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:14.463 10:14:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:14.463 10:14:44 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:14.463 10:14:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:14.463 10:14:44 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:14.463 10:14:44 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:14.463 10:14:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:14.463 10:14:44 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:14.463 10:14:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:14.463 10:14:44 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:14.463 10:14:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:14.463 10:14:44 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:14.463 10:14:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:14.463 10:14:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:14.463 10:14:44 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:22:14.463 10:14:44 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:14.463 10:14:44 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:14.463 10:14:44 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:14.463 10:14:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:14.463 
10:14:44 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:14.463 10:14:44 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:14.463 10:14:44 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:14.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:14.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:22:14.463 00:22:14.463 --- 10.0.0.2 ping statistics --- 00:22:14.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.463 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:22:14.463 10:14:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:14.463 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:14.464 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.035 ms 00:22:14.464 00:22:14.464 --- 10.0.0.3 ping statistics --- 00:22:14.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.464 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:22:14.464 10:14:44 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:14.464 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:14.464 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:22:14.464 00:22:14.464 --- 10.0.0.1 ping statistics --- 00:22:14.464 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:14.464 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:22:14.464 10:14:44 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:14.464 10:14:44 -- nvmf/common.sh@421 -- # return 0 00:22:14.464 10:14:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:14.464 10:14:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:14.464 10:14:44 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:14.464 10:14:44 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:14.464 10:14:44 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:14.464 10:14:44 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:14.464 10:14:44 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:14.464 10:14:44 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:14.464 10:14:44 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:22:14.464 10:14:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:14.464 10:14:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:14.464 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.464 ************************************ 00:22:14.464 START TEST nvmf_digest_clean 00:22:14.464 ************************************ 00:22:14.464 10:14:44 -- common/autotest_common.sh@1104 -- # run_digest 00:22:14.464 10:14:44 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:22:14.464 10:14:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:14.464 10:14:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:14.464 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
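nvmfappstart passes --wait-for-rpc here, so the target comes up with its framework paused: subsystems stay uninitialized until an explicit framework_start_init RPC arrives, leaving a window for configuration that must precede startup. The same pattern was used for the /tmp/host.sock app earlier in this log (bdev_nvme_set_options, then framework_start_init). In outline (the nvmf_tgt command line is verbatim from the trace below; the rpc.py path assumes the repo layout of this run):

    # Start paused inside the target namespace; nothing initializes yet.
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    # ...apply pre-init settings over /var/tmp/spdk.sock here...
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init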
00:22:14.464 10:14:44 -- nvmf/common.sh@469 -- # nvmfpid=96551 00:22:14.464 10:14:44 -- nvmf/common.sh@470 -- # waitforlisten 96551 00:22:14.464 10:14:44 -- common/autotest_common.sh@819 -- # '[' -z 96551 ']' 00:22:14.464 10:14:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:14.464 10:14:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:14.464 10:14:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:14.464 10:14:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:14.464 10:14:44 -- common/autotest_common.sh@10 -- # set +x 00:22:14.464 10:14:44 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:14.722 [2024-07-14 10:14:45.018698] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:14.722 [2024-07-14 10:14:45.018786] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:14.722 [2024-07-14 10:14:45.157965] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.722 [2024-07-14 10:14:45.224044] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:14.722 [2024-07-14 10:14:45.224182] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:14.722 [2024-07-14 10:14:45.224196] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:14.722 [2024-07-14 10:14:45.224205] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
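[editor's note] waitforlisten blocks until the given pid is alive and its RPC socket answers; a rough equivalent of the helper (the retry cadence and the use of rpc_get_methods as the probe are assumptions, not copied from autotest_common.sh):

    # Hedged approximation of waitforlisten; rpc_addr defaults as echoed above.
    waitforlisten() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
        for ((i = 0; i < max_retries; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &>/dev/null && return 0   # socket is answering
            sleep 0.5
        done
        return 1
    }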
00:22:14.722 [2024-07-14 10:14:45.224279] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:15.658 10:14:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:15.658 10:14:46 -- common/autotest_common.sh@852 -- # return 0 00:22:15.658 10:14:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:15.658 10:14:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:15.658 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:22:15.658 10:14:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.658 10:14:46 -- host/digest.sh@120 -- # common_target_config 00:22:15.658 10:14:46 -- host/digest.sh@43 -- # rpc_cmd 00:22:15.658 10:14:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:15.658 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:22:15.658 null0 00:22:15.658 [2024-07-14 10:14:46.155664] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.658 [2024-07-14 10:14:46.179764] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:15.658 10:14:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:15.658 10:14:46 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:22:15.658 10:14:46 -- host/digest.sh@77 -- # local rw bs qd 00:22:15.658 10:14:46 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:15.658 10:14:46 -- host/digest.sh@80 -- # rw=randread 00:22:15.658 10:14:46 -- host/digest.sh@80 -- # bs=4096 00:22:15.658 10:14:46 -- host/digest.sh@80 -- # qd=128 00:22:15.658 10:14:46 -- host/digest.sh@82 -- # bperfpid=96601 00:22:15.658 10:14:46 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:15.658 10:14:46 -- host/digest.sh@83 -- # waitforlisten 96601 /var/tmp/bperf.sock 00:22:15.658 10:14:46 -- common/autotest_common.sh@819 -- # '[' -z 96601 ']' 00:22:15.658 10:14:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:15.658 10:14:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:15.658 10:14:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:15.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:15.658 10:14:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:15.658 10:14:46 -- common/autotest_common.sh@10 -- # set +x 00:22:15.916 [2024-07-14 10:14:46.242270] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:15.916 [2024-07-14 10:14:46.242519] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96601 ] 00:22:15.916 [2024-07-14 10:14:46.389359] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.173 [2024-07-14 10:14:46.471484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:16.739 10:14:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:16.739 10:14:47 -- common/autotest_common.sh@852 -- # return 0 00:22:16.739 10:14:47 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:16.739 10:14:47 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:16.739 10:14:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:16.997 10:14:47 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:16.997 10:14:47 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:17.255 nvme0n1 00:22:17.255 10:14:47 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:17.255 10:14:47 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:17.512 Running I/O for 2 seconds... 00:22:19.412 00:22:19.412 Latency(us) 00:22:19.412 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.412 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:22:19.412 nvme0n1 : 2.00 21610.81 84.42 0.00 0.00 5918.04 2576.76 18826.71 00:22:19.412 =================================================================================================================== 00:22:19.412 Total : 21610.81 84.42 0.00 0.00 5918.04 2576.76 18826.71 00:22:19.412 0 00:22:19.412 10:14:49 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:19.412 10:14:49 -- host/digest.sh@92 -- # get_accel_stats 00:22:19.412 10:14:49 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:19.412 10:14:49 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:19.412 | select(.opcode=="crc32c") 00:22:19.412 | "\(.module_name) \(.executed)"' 00:22:19.412 10:14:49 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:19.670 10:14:50 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:19.670 10:14:50 -- host/digest.sh@93 -- # exp_module=software 00:22:19.670 10:14:50 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:19.670 10:14:50 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:19.670 10:14:50 -- host/digest.sh@97 -- # killprocess 96601 00:22:19.670 10:14:50 -- common/autotest_common.sh@926 -- # '[' -z 96601 ']' 00:22:19.670 10:14:50 -- common/autotest_common.sh@930 -- # kill -0 96601 00:22:19.670 10:14:50 -- common/autotest_common.sh@931 -- # uname 00:22:19.670 10:14:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:19.670 10:14:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96601 00:22:19.670 10:14:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:19.670 10:14:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:19.670 killing process with pid 96601 
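[editor's note] Each run_bperf iteration ends the same way: accel_get_stats is queried over the bperf socket and the jq filter above reduces it to "module executed" for crc32c operations; the test asserts the expected module (software here, since no accel offload is configured) actually executed before killing bdevperf. The assertion, pulled out of the log lines above:

    # Sketch of the digest-clean check from host/digest.sh@92-95 as run above.
    read -r acc_module acc_executed < <(
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    exp_module=software                 # no hardware accel module is loaded in this run
    (( acc_executed > 0 ))              # crc32c must actually have executed
    [[ $acc_module == "$exp_module" ]]  # and in the expected module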
00:22:19.670 10:14:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96601' 00:22:19.670 Received shutdown signal, test time was about 2.000000 seconds 00:22:19.670 00:22:19.670 Latency(us) 00:22:19.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:19.670 =================================================================================================================== 00:22:19.670 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:19.670 10:14:50 -- common/autotest_common.sh@945 -- # kill 96601 00:22:19.670 10:14:50 -- common/autotest_common.sh@950 -- # wait 96601 00:22:19.929 10:14:50 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:22:19.929 10:14:50 -- host/digest.sh@77 -- # local rw bs qd 00:22:19.929 10:14:50 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:19.929 10:14:50 -- host/digest.sh@80 -- # rw=randread 00:22:19.929 10:14:50 -- host/digest.sh@80 -- # bs=131072 00:22:19.929 10:14:50 -- host/digest.sh@80 -- # qd=16 00:22:19.929 10:14:50 -- host/digest.sh@82 -- # bperfpid=96687 00:22:19.929 10:14:50 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:19.929 10:14:50 -- host/digest.sh@83 -- # waitforlisten 96687 /var/tmp/bperf.sock 00:22:19.929 10:14:50 -- common/autotest_common.sh@819 -- # '[' -z 96687 ']' 00:22:19.929 10:14:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:19.929 10:14:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:19.929 10:14:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:19.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:19.929 10:14:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:19.929 10:14:50 -- common/autotest_common.sh@10 -- # set +x 00:22:19.929 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:19.929 Zero copy mechanism will not be used. 00:22:19.929 [2024-07-14 10:14:50.408634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
00:22:19.929 [2024-07-14 10:14:50.408753] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96687 ] 00:22:20.187 [2024-07-14 10:14:50.543926] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.187 [2024-07-14 10:14:50.620820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:21.122 10:14:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:21.123 10:14:51 -- common/autotest_common.sh@852 -- # return 0 00:22:21.123 10:14:51 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:21.123 10:14:51 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:21.123 10:14:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:21.123 10:14:51 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.123 10:14:51 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:21.689 nvme0n1 00:22:21.689 10:14:51 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:21.689 10:14:51 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:21.689 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:21.689 Zero copy mechanism will not be used. 00:22:21.689 Running I/O for 2 seconds... 00:22:23.590 00:22:23.590 Latency(us) 00:22:23.590 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.590 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:22:23.590 nvme0n1 : 2.00 9779.50 1222.44 0.00 0.00 1633.24 441.25 5987.61 00:22:23.590 =================================================================================================================== 00:22:23.590 Total : 9779.50 1222.44 0.00 0.00 1633.24 441.25 5987.61 00:22:23.590 0 00:22:23.590 10:14:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:23.590 10:14:54 -- host/digest.sh@92 -- # get_accel_stats 00:22:23.590 10:14:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:23.590 10:14:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:23.590 | select(.opcode=="crc32c") 00:22:23.590 | "\(.module_name) \(.executed)"' 00:22:23.590 10:14:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:23.847 10:14:54 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:23.847 10:14:54 -- host/digest.sh@93 -- # exp_module=software 00:22:23.847 10:14:54 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:23.847 10:14:54 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:23.847 10:14:54 -- host/digest.sh@97 -- # killprocess 96687 00:22:23.847 10:14:54 -- common/autotest_common.sh@926 -- # '[' -z 96687 ']' 00:22:23.847 10:14:54 -- common/autotest_common.sh@930 -- # kill -0 96687 00:22:23.847 10:14:54 -- common/autotest_common.sh@931 -- # uname 00:22:23.847 10:14:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:23.847 10:14:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96687 00:22:23.847 10:14:54 -- common/autotest_common.sh@932 -- # process_name=reactor_1 
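[editor's note] run_bperf is parameterized by workload, block size, and queue depth; this second run simply swaps 4096/128 for 131072/16, which also trips bdevperf's zero-copy notice (131072 is above the 65536 threshold, so zero copy is skipped). The launch line, reduced to its parameters as they appear in the log:

    # The bdevperf invocation inside run_bperf; rw/bs/qd vary per call.
    rw=randread bs=131072 qd=16
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -q "$qd" -t 2 -z --wait-for-rpc &
    bperfpid=$!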
00:22:23.847 10:14:54 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:23.847 killing process with pid 96687 00:22:23.847 10:14:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96687' 00:22:23.847 10:14:54 -- common/autotest_common.sh@945 -- # kill 96687 00:22:23.847 Received shutdown signal, test time was about 2.000000 seconds 00:22:23.847 00:22:23.847 Latency(us) 00:22:23.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:23.847 =================================================================================================================== 00:22:23.847 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:23.847 10:14:54 -- common/autotest_common.sh@950 -- # wait 96687 00:22:24.149 10:14:54 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:22:24.149 10:14:54 -- host/digest.sh@77 -- # local rw bs qd 00:22:24.149 10:14:54 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:24.149 10:14:54 -- host/digest.sh@80 -- # rw=randwrite 00:22:24.149 10:14:54 -- host/digest.sh@80 -- # bs=4096 00:22:24.149 10:14:54 -- host/digest.sh@80 -- # qd=128 00:22:24.149 10:14:54 -- host/digest.sh@82 -- # bperfpid=96776 00:22:24.149 10:14:54 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:22:24.149 10:14:54 -- host/digest.sh@83 -- # waitforlisten 96776 /var/tmp/bperf.sock 00:22:24.149 10:14:54 -- common/autotest_common.sh@819 -- # '[' -z 96776 ']' 00:22:24.149 10:14:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:24.149 10:14:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:24.149 10:14:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:24.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:24.149 10:14:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:24.149 10:14:54 -- common/autotest_common.sh@10 -- # set +x 00:22:24.149 [2024-07-14 10:14:54.569877] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
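[editor's note] killprocess, whose checks are interleaved through the output above, verifies the pid is still alive and is the process it expects (ps shows the SPDK reactor thread name, e.g. reactor_1) before killing and waiting on it. A hedged approximation, reconstructed only from the checks visible in this log:

    # Approximation of killprocess from autotest_common.sh (sketch, not verbatim).
    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1
        kill -0 "$pid" 2>/dev/null || return 1            # must still be running
        process_name=$(ps --no-headers -o comm= "$pid")   # reactor_N for SPDK apps
        [[ $process_name != sudo ]] || return 1           # never kill a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" || true
    }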
00:22:24.149 [2024-07-14 10:14:54.569962] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96776 ] 00:22:24.434 [2024-07-14 10:14:54.706696] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:24.434 [2024-07-14 10:14:54.785246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:25.000 10:14:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:25.000 10:14:55 -- common/autotest_common.sh@852 -- # return 0 00:22:25.000 10:14:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:25.000 10:14:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:25.000 10:14:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:25.258 10:14:55 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.258 10:14:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:25.823 nvme0n1 00:22:25.823 10:14:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:25.823 10:14:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:25.823 Running I/O for 2 seconds... 00:22:27.722 00:22:27.723 Latency(us) 00:22:27.723 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.723 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:27.723 nvme0n1 : 2.00 27125.89 105.96 0.00 0.00 4712.79 1936.29 10902.81 00:22:27.723 =================================================================================================================== 00:22:27.723 Total : 27125.89 105.96 0.00 0.00 4712.79 1936.29 10902.81 00:22:27.723 0 00:22:27.723 10:14:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:27.723 10:14:58 -- host/digest.sh@92 -- # get_accel_stats 00:22:27.723 10:14:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:27.723 10:14:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:27.723 | select(.opcode=="crc32c") 00:22:27.723 | "\(.module_name) \(.executed)"' 00:22:27.723 10:14:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:27.981 10:14:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:27.981 10:14:58 -- host/digest.sh@93 -- # exp_module=software 00:22:27.981 10:14:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:27.981 10:14:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:27.981 10:14:58 -- host/digest.sh@97 -- # killprocess 96776 00:22:27.981 10:14:58 -- common/autotest_common.sh@926 -- # '[' -z 96776 ']' 00:22:27.981 10:14:58 -- common/autotest_common.sh@930 -- # kill -0 96776 00:22:27.981 10:14:58 -- common/autotest_common.sh@931 -- # uname 00:22:27.981 10:14:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:27.981 10:14:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96776 00:22:28.239 10:14:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:28.239 10:14:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:28.239 10:14:58 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 96776' 00:22:28.239 killing process with pid 96776 00:22:28.239 10:14:58 -- common/autotest_common.sh@945 -- # kill 96776 00:22:28.239 Received shutdown signal, test time was about 2.000000 seconds 00:22:28.239 00:22:28.239 Latency(us) 00:22:28.239 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:28.239 =================================================================================================================== 00:22:28.239 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:28.239 10:14:58 -- common/autotest_common.sh@950 -- # wait 96776 00:22:28.239 10:14:58 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:22:28.239 10:14:58 -- host/digest.sh@77 -- # local rw bs qd 00:22:28.239 10:14:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:22:28.239 10:14:58 -- host/digest.sh@80 -- # rw=randwrite 00:22:28.239 10:14:58 -- host/digest.sh@80 -- # bs=131072 00:22:28.239 10:14:58 -- host/digest.sh@80 -- # qd=16 00:22:28.239 10:14:58 -- host/digest.sh@82 -- # bperfpid=96862 00:22:28.239 10:14:58 -- host/digest.sh@83 -- # waitforlisten 96862 /var/tmp/bperf.sock 00:22:28.239 10:14:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:22:28.239 10:14:58 -- common/autotest_common.sh@819 -- # '[' -z 96862 ']' 00:22:28.239 10:14:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:28.239 10:14:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:28.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:28.239 10:14:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:28.239 10:14:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:28.239 10:14:58 -- common/autotest_common.sh@10 -- # set +x 00:22:28.239 [2024-07-14 10:14:58.780337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:28.239 [2024-07-14 10:14:58.780442] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid96862 ] 00:22:28.239 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:28.239 Zero copy mechanism will not be used. 
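[editor's note] Taken together, nvmf_digest_clean sweeps the same crc32c assertion across four workload shapes; the four calls visible at host/digest.sh@122-125 in this log are equivalent to:

    # The digest-clean matrix exercised above: rw x block size x queue depth.
    run_bperf randread  4096   128
    run_bperf randread  131072 16
    run_bperf randwrite 4096   128
    run_bperf randwrite 131072 16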
00:22:28.497 [2024-07-14 10:14:58.916136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.497 [2024-07-14 10:14:58.988988] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:29.431 10:14:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:29.431 10:14:59 -- common/autotest_common.sh@852 -- # return 0 00:22:29.431 10:14:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:22:29.431 10:14:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:22:29.431 10:14:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:22:29.688 10:15:00 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.688 10:15:00 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:29.946 nvme0n1 00:22:29.946 10:15:00 -- host/digest.sh@91 -- # bperf_py perform_tests 00:22:29.946 10:15:00 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:29.946 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:29.946 Zero copy mechanism will not be used. 00:22:29.946 Running I/O for 2 seconds... 00:22:32.477 00:22:32.477 Latency(us) 00:22:32.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.477 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:32.477 nvme0n1 : 2.00 8383.96 1048.00 0.00 0.00 1903.86 1623.51 5570.56 00:22:32.477 =================================================================================================================== 00:22:32.477 Total : 8383.96 1048.00 0.00 0.00 1903.86 1623.51 5570.56 00:22:32.477 0 00:22:32.477 10:15:02 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:22:32.477 10:15:02 -- host/digest.sh@92 -- # get_accel_stats 00:22:32.477 10:15:02 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:22:32.477 10:15:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:22:32.477 10:15:02 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:22:32.477 | select(.opcode=="crc32c") 00:22:32.477 | "\(.module_name) \(.executed)"' 00:22:32.477 10:15:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:22:32.477 10:15:02 -- host/digest.sh@93 -- # exp_module=software 00:22:32.477 10:15:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:22:32.477 10:15:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:22:32.477 10:15:02 -- host/digest.sh@97 -- # killprocess 96862 00:22:32.477 10:15:02 -- common/autotest_common.sh@926 -- # '[' -z 96862 ']' 00:22:32.477 10:15:02 -- common/autotest_common.sh@930 -- # kill -0 96862 00:22:32.477 10:15:02 -- common/autotest_common.sh@931 -- # uname 00:22:32.477 10:15:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:32.477 10:15:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96862 00:22:32.477 killing process with pid 96862 00:22:32.477 Received shutdown signal, test time was about 2.000000 seconds 00:22:32.477 00:22:32.477 Latency(us) 00:22:32.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.477 
=================================================================================================================== 00:22:32.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.477 10:15:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:22:32.477 10:15:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:22:32.477 10:15:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96862' 00:22:32.477 10:15:02 -- common/autotest_common.sh@945 -- # kill 96862 00:22:32.477 10:15:02 -- common/autotest_common.sh@950 -- # wait 96862 00:22:32.477 10:15:02 -- host/digest.sh@126 -- # killprocess 96551 00:22:32.477 10:15:02 -- common/autotest_common.sh@926 -- # '[' -z 96551 ']' 00:22:32.477 10:15:02 -- common/autotest_common.sh@930 -- # kill -0 96551 00:22:32.477 10:15:02 -- common/autotest_common.sh@931 -- # uname 00:22:32.478 10:15:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:22:32.478 10:15:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96551 00:22:32.478 killing process with pid 96551 00:22:32.478 10:15:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:22:32.478 10:15:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:22:32.478 10:15:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96551' 00:22:32.478 10:15:02 -- common/autotest_common.sh@945 -- # kill 96551 00:22:32.478 10:15:02 -- common/autotest_common.sh@950 -- # wait 96551 00:22:32.736 ************************************ 00:22:32.736 END TEST nvmf_digest_clean 00:22:32.736 ************************************ 00:22:32.736 00:22:32.736 real 0m18.218s 00:22:32.736 user 0m34.281s 00:22:32.736 sys 0m4.724s 00:22:32.736 10:15:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.736 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.736 10:15:03 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:22:32.736 10:15:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:22:32.736 10:15:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:22:32.736 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.736 ************************************ 00:22:32.736 START TEST nvmf_digest_error 00:22:32.736 ************************************ 00:22:32.736 10:15:03 -- common/autotest_common.sh@1104 -- # run_digest_error 00:22:32.736 10:15:03 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:22:32.736 10:15:03 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:32.736 10:15:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:32.736 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.736 10:15:03 -- nvmf/common.sh@469 -- # nvmfpid=96975 00:22:32.736 10:15:03 -- nvmf/common.sh@470 -- # waitforlisten 96975 00:22:32.736 10:15:03 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:32.737 10:15:03 -- common/autotest_common.sh@819 -- # '[' -z 96975 ']' 00:22:32.737 10:15:03 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:32.737 10:15:03 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:32.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:32.737 10:15:03 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
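[editor's note] The START/END banners and the real/user/sys summary come from the run_test wrapper, which brackets each test function with banners and a `time` measurement; roughly (a sketch inferred from the banners in this log, not the actual helper):

    # Hedged sketch of run_test from autotest_common.sh.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }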
00:22:32.737 10:15:03 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:32.737 10:15:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.995 [2024-07-14 10:15:03.296637] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:32.995 [2024-07-14 10:15:03.296748] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:32.995 [2024-07-14 10:15:03.437014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.995 [2024-07-14 10:15:03.508921] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:32.995 [2024-07-14 10:15:03.509079] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:32.995 [2024-07-14 10:15:03.509091] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:32.995 [2024-07-14 10:15:03.509099] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:32.995 [2024-07-14 10:15:03.509120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:33.927 10:15:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:33.927 10:15:04 -- common/autotest_common.sh@852 -- # return 0 00:22:33.927 10:15:04 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:33.927 10:15:04 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:33.927 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.927 10:15:04 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:33.927 10:15:04 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:22:33.927 10:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.927 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.927 [2024-07-14 10:15:04.301727] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:22:33.927 10:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.927 10:15:04 -- host/digest.sh@104 -- # common_target_config 00:22:33.927 10:15:04 -- host/digest.sh@43 -- # rpc_cmd 00:22:33.927 10:15:04 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:33.927 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.927 null0 00:22:33.927 [2024-07-14 10:15:04.408788] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:33.927 [2024-07-14 10:15:04.432901] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:33.927 10:15:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:33.927 10:15:04 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:22:33.927 10:15:04 -- host/digest.sh@54 -- # local rw bs qd 00:22:33.927 10:15:04 -- host/digest.sh@56 -- # rw=randread 00:22:33.927 10:15:04 -- host/digest.sh@56 -- # bs=4096 00:22:33.927 10:15:04 -- host/digest.sh@56 -- # qd=128 00:22:33.927 10:15:04 -- host/digest.sh@58 -- # bperfpid=97025 00:22:33.927 10:15:04 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:22:33.927 10:15:04 -- host/digest.sh@60 -- # waitforlisten 97025 /var/tmp/bperf.sock 00:22:33.927 10:15:04 -- common/autotest_common.sh@819 -- # '[' -z 97025 ']' 00:22:33.927 
10:15:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:22:33.927 10:15:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:33.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:22:33.927 10:15:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:22:33.927 10:15:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:33.927 10:15:04 -- common/autotest_common.sh@10 -- # set +x 00:22:34.186 [2024-07-14 10:15:04.482241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:34.186 [2024-07-14 10:15:04.482338] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97025 ] 00:22:34.186 [2024-07-14 10:15:04.617497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.186 [2024-07-14 10:15:04.697086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.120 10:15:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:35.120 10:15:05 -- common/autotest_common.sh@852 -- # return 0 00:22:35.120 10:15:05 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:35.120 10:15:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:35.120 10:15:05 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:35.120 10:15:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.120 10:15:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.120 10:15:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:35.120 10:15:05 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:35.120 10:15:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:35.686 nvme0n1 00:22:35.686 10:15:05 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:35.686 10:15:05 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:35.686 10:15:05 -- common/autotest_common.sh@10 -- # set +x 00:22:35.686 10:15:05 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:35.686 10:15:05 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:35.686 10:15:05 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:35.686 Running I/O for 2 seconds... 
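[editor's note] The flood of "data digest error" lines that follows is deliberate: crc32c on the target was routed to the accel error module at startup (accel_assign_opc, host/digest.sh@103 above), and after the --ddgst controller attach, 256 operations were set to corrupt, so the initiator detects bad data digests and each affected read completes with COMMAND TRANSIENT TRANSPORT ERROR and is retried thanks to --bdev-retry-count -1. The injection sequence, condensed from the RPCs in this log (rpc_cmd talks to the target's /var/tmp/spdk.sock, the other calls to the bperf socket):

    # Condensed error-injection setup for nvmf_digest_error, as run above.
    rpc_cmd accel_assign_opc -o crc32c -m error          # target: route crc32c to error module
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc_cmd accel_error_inject_error -o crc32c -t disable   # start with injection off
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt next 256 digests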
00:22:35.686 [2024-07-14 10:15:06.073435] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.073503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:9550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.073537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.087290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.087364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.087394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.100571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.100628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7851 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.100658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.113788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.113844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:13407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.113874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.127942] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.128000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.128030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.141212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.141270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.141299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.154415] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.154473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:15912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.154503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.163328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.163384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:6599 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.163413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.176870] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.176928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.176957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.189742] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.189799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.189829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.202775] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.202854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.202883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.216470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.216527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.216556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.686 [2024-07-14 10:15:06.229591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.686 [2024-07-14 10:15:06.229647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:9446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.686 [2024-07-14 10:15:06.229676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.238729] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.238786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.238859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.251807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.251864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:19297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.251892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.264868] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.264926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.264955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.275067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.275139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.275152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.288688] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.288746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.288774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.300841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.300897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.300926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.311185] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.311250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.311279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.323615] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.323671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:21402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.323699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.336359] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.336410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.944 [2024-07-14 10:15:06.336440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.944 [2024-07-14 10:15:06.350428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.944 [2024-07-14 10:15:06.350484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 [2024-07-14 10:15:06.350514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.945 [2024-07-14 10:15:06.365207] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.945 [2024-07-14 10:15:06.365309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 [2024-07-14 10:15:06.365324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.945 [2024-07-14 10:15:06.379864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.945 [2024-07-14 10:15:06.379918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 [2024-07-14 10:15:06.379948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.945 [2024-07-14 10:15:06.393381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.945 [2024-07-14 10:15:06.393455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:13519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 [2024-07-14 10:15:06.393484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.945 [2024-07-14 10:15:06.406783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.945 [2024-07-14 10:15:06.406863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 [2024-07-14 10:15:06.406892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:35.945 [2024-07-14 10:15:06.419837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580) 00:22:35.945 [2024-07-14 10:15:06.419893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:4102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:35.945 
[2024-07-14 10:15:06.419921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:35.945 [2024-07-14 10:15:06.432821] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580)
00:22:35.945 [2024-07-14 10:15:06.432877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:2202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.945 [2024-07-14 10:15:06.432906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:35.945 [2024-07-14 10:15:06.446739] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580)
00:22:35.945 [2024-07-14 10:15:06.446795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:25033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:35.945 [2024-07-14 10:15:06.446848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data digest error / READ / COMMAND TRANSIENT TRANSPORT ERROR record triplet repeats from 10:15:06.455 through 10:15:08.02 with varying cid and lba; all 161 transient-error completions are tallied below ...]
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:37.501 [2024-07-14 10:15:08.038264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580)
00:22:37.501 [2024-07-14 10:15:08.038321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:5233 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:37.501 [2024-07-14 10:15:08.038350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:37.759 [2024-07-14 10:15:08.050536] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd80580)
00:22:37.759 [2024-07-14 10:15:08.050591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:19562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:37.759 [2024-07-14 10:15:08.050619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:37.759
00:22:37.759 Latency(us)
00:22:37.759 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:37.759 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:22:37.759 nvme0n1 : 2.01 20575.77 80.37 0.00 0.00 6215.55 2532.07 20375.74
00:22:37.759 ===================================================================================================================
00:22:37.760 Total : 20575.77 80.37 0.00 0.00 6215.55 2532.07 20375.74
00:22:37.760 0
00:22:37.760 10:15:08 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:37.760 10:15:08 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:37.760 10:15:08 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:37.760 | .driver_specific
00:22:37.760 | .nvme_error
00:22:37.760 | .status_code
00:22:37.760 | .command_transient_transport_error'
00:22:37.760 10:15:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:38.018 10:15:08 -- host/digest.sh@71 -- # (( 161 > 0 ))
00:22:38.018 10:15:08 -- host/digest.sh@73 -- # killprocess 97025
00:22:38.018 10:15:08 -- common/autotest_common.sh@926 -- # '[' -z 97025 ']'
00:22:38.018 10:15:08 -- common/autotest_common.sh@930 -- # kill -0 97025
00:22:38.018 10:15:08 -- common/autotest_common.sh@931 -- # uname
00:22:38.018 10:15:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:38.018 10:15:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97025
00:22:38.018 10:15:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:38.018 10:15:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:38.018 killing process with pid 97025
10:15:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97025'
10:15:08 -- common/autotest_common.sh@945 -- # kill 97025
00:22:38.018 Received shutdown signal, test time was about 2.000000 seconds
00:22:38.018
00:22:38.018 Latency(us)
00:22:38.018 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:38.018 ===================================================================================================================
00:22:38.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:38.018 10:15:08 -- common/autotest_common.sh@950 -- # wait 97025
00:22:38.018 10:15:08 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16
00:22:38.018 10:15:08 -- host/digest.sh@54 -- # local rw bs qd
00:22:38.018 10:15:08 -- host/digest.sh@56 -- # rw=randread
00:22:38.018 10:15:08 -- host/digest.sh@56 -- # bs=131072
00:22:38.018 10:15:08 -- host/digest.sh@56 -- # qd=16
00:22:38.018 10:15:08 -- host/digest.sh@58 -- # bperfpid=97110
00:22:38.018 10:15:08 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:22:38.018 10:15:08 -- host/digest.sh@60 -- # waitforlisten 97110 /var/tmp/bperf.sock
00:22:38.018 10:15:08 -- common/autotest_common.sh@819 -- # '[' -z 97110 ']'
00:22:38.018 10:15:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:38.018 10:15:08 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:38.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:15:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
10:15:08 -- common/autotest_common.sh@828 -- # xtrace_disable
10:15:08 -- common/autotest_common.sh@10 -- # set +x
00:22:38.276 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:38.276 Zero copy mechanism will not be used.
00:22:38.276 [2024-07-14 10:15:08.611754] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:38.276 [2024-07-14 10:15:08.611857] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97110 ]
00:22:38.535 [2024-07-14 10:15:08.751182] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:38.535 [2024-07-14 10:15:08.827891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:39.101 10:15:09 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:39.101 10:15:09 -- common/autotest_common.sh@852 -- # return 0
00:22:39.101 10:15:09 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:39.101 10:15:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:39.359 10:15:09 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:39.359 10:15:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:39.359 10:15:09 -- common/autotest_common.sh@10 -- # set +x
00:22:39.359 10:15:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:39.359 10:15:09 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.359 10:15:09 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:39.621 nvme0n1
00:22:39.621 10:15:10 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:39.621 10:15:10 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:39.621 10:15:10 -- common/autotest_common.sh@10 -- # set +x
00:22:39.621 10:15:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:39.621 10:15:10 -- host/digest.sh@69 -- # bperf_py
perform_tests 00:22:39.621 10:15:10 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:39.895 I/O size of 131072 is greater than zero copy threshold (65536). 00:22:39.895 Zero copy mechanism will not be used. 00:22:39.895 Running I/O for 2 seconds... 00:22:39.895 [2024-07-14 10:15:10.202664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.202730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.202763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.206983] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.207042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.207056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.210993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.211054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.211084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.214686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.214738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.214768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.218560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.218613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.218643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.222632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.222687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.222717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.225608] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.225663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.225693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.229590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.229642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.229671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.233029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.233085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.233114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.236606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.236663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.236692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.240324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.240379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.240409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.244121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.244158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.244187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.247552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.247587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.247617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.251169] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.251229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.251260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.255004] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.255044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.255058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.259277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.259316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.259330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.263036] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.263087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.263133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.267407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.267448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.267477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.270925] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.270984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.271014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.273727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.273780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.273809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
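These triplets repeat for the whole two-second window and are the point of the test rather than a failure of it. The sequence traced just before perform_tests explains the cadence: bdevperf drives 131072-byte random reads at queue depth 16 against nvme0, which was attached over TCP with --ddgst so every C2H data PDU carries a CRC32C data digest, and accel_error_inject_error -o crc32c -t corrupt -i 32 corrupts CRC32C results (reading -i as an every-32nd-operation interval). Each corrupted digest fails the check in nvme_tcp_accel_seq_recv_compute_crc32_done and the read completes as COMMAND TRANSIENT TRANSPORT ERROR with dnr:0, i.e. retryable, which --bdev-retry-count -1 turns into a retry instead of an I/O failure. A condensed sketch of that traced sequence, with paths and flags taken verbatim from the log; the one assumption is that rpc_cmd (used for the injection RPCs, in contrast to bperf_rpc) talks to the harness's default RPC socket rather than to /var/tmp/bperf.sock:

    # start the initiator-side benchmark app; the harness waits for its RPC
    # socket (waitforlisten) before issuing the commands below
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
    # count NVMe errors per status code and retry transient errors forever
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # clear any stale injection, then attach with data digest enabled
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t disable
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt crc32c results at interval 32, then kick off the timed run
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests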
00:22:39.895 [2024-07-14 10:15:10.276946] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.277003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.277033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.280413] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.280469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.280498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.283566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.283621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.283650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.287722] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.287778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.287808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.291587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.895 [2024-07-14 10:15:10.291644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.895 [2024-07-14 10:15:10.291674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.895 [2024-07-14 10:15:10.294422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.294473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.294502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.298123] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.298180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.298209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.301515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.301573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.301603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.304943] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.304998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.305028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.308655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.308712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.308741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.312300] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.312357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.312387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.315631] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.315688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.315717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.319050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.319109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.319144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.322417] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.322470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.322499] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.325553] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.325606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.325635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.329235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.329290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.329323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.332344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.332399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.332429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.336165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.336248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.336263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.340132] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.340190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.340230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.343931] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.343988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.344017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.347833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.347889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.347918] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.351150] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.351258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.351272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.355071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.355144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.355157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.358910] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.358952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.358982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.363108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.363225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.363239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.367052] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.367110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.367155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.371079] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.371142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.371171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.375329] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.375383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:22:39.896 [2024-07-14 10:15:10.375412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.379050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.379093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.379123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.383385] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.383441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.383469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.387466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.387522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.387551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.390865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.390921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.390951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.394732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.394784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.394838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.398404] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.896 [2024-07-14 10:15:10.398458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.896 [2024-07-14 10:15:10.398487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.896 [2024-07-14 10:15:10.402180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.402248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.402278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.405732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.405788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.405817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.408997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.409050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.409079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.412961] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.413018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.413046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.416723] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.416782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.416811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.420668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.420724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.420753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.425289] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.425346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.425376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.429186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.429250] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.429281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:39.897 [2024-07-14 10:15:10.433088] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:39.897 [2024-07-14 10:15:10.433150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.897 [2024-07-14 10:15:10.433180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.437154] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.437254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.437270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.441354] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.441425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.445552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.445638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.445666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.449062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.449116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.449145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.452432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.452490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.452520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.456416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 
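None of these completions aborts the run on its own; with --nvme-error-stat set, each one bumps a per-status-code counter in the controller's error statistics, and the pass criterion is the check visible at host/digest.sh@71 after the first pass: read the counter back and require it to be positive ((( 161 > 0 )) above). A hedged reconstruction of that helper from the traced RPC and jq filter; the function body is inferred from the @27/@28 trace lines, not quoted from the script:

    get_transient_errcount() {
        # pull per-bdev iostat, then drill into the NVMe error counters
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$1" |
            jq -r '.bdevs[0]
                   | .driver_specific
                   | .nvme_error
                   | .status_code
                   | .command_transient_transport_error'
    }
    (( $(get_transient_errcount nvme0n1) > 0 ))   # 161 after the 4096-byte pass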
00:22:40.171 [2024-07-14 10:15:10.456472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.456486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.459984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.460040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.460069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.463648] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.463703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.463731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.467048] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.467108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.467123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.470690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.470741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.470770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.473694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.473745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.473774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.477133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.477191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.477255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.480380] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.480438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.480466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.483847] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.483907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.483936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.487889] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.487949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.487978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.491906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.491965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.491994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.496086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.496146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.496175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.499717] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.499776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.499805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.503567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.503641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.503670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.507477] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.507536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.507570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.511714] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.511769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.511798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.515439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.515497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.515526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.519355] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.519411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.519439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.522674] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.522725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.171 [2024-07-14 10:15:10.522754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.171 [2024-07-14 10:15:10.526062] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.171 [2024-07-14 10:15:10.526115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.526144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.530198] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.530267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.530297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:22:40.172 [2024-07-14 10:15:10.534265] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.534323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.534352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.537833] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.537891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.537921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.541510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.541569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.541599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.545027] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.545087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.545116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.548443] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.548501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.548531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.552299] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.552358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.552388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.556144] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.556229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.556244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.559348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.559408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.559437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.563383] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.563441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.563470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.566955] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.566998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.567012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.570346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.570397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.570426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.573996] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.574056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.574085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.577116] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.577176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.577205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.580655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.580714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.580743] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.584233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.584290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.584318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.588183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.588252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.588281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.591716] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.591773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.591802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.595493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.595554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.595583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.599115] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.599232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.599248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.602933] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.602994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.606028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.606080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.606110] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.610063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.610122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.610152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.613856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.613915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.613944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.617316] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.617371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.617400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.620755] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.620813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.620842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.624153] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.624236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.624250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.627458] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.627515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.172 [2024-07-14 10:15:10.627544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.172 [2024-07-14 10:15:10.630872] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.172 [2024-07-14 10:15:10.630928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:40.173 [2024-07-14 10:15:10.630942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.634256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.634307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.634336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.637468] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.637526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.637556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.641340] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.641399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.641428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.644799] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.644858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.644887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.648423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.648478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.648506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.651856] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.651914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.651943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.655351] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.655407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.655436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.658585] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.658637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.658667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.661817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.661869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.661897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.665122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.665176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.665205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.669482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.669538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.669568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.672878] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.672931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.672960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.676381] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.676439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.676468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.680406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.680465] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.680493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.683802] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.683862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.683891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.686989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.687047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.687076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.690733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.690784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.690839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.694181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.694245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.694274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.697181] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.697257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.697270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.701000] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.701056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.701085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.704538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.704597] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.704626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.708251] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.708311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.708340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.711671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.711730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.711759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.173 [2024-07-14 10:15:10.715390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.173 [2024-07-14 10:15:10.715448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.173 [2024-07-14 10:15:10.715477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.434 [2024-07-14 10:15:10.719058] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.434 [2024-07-14 10:15:10.719101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.434 [2024-07-14 10:15:10.719130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.722420] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.722472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.722501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.726330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.726390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.726418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.730076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.730133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.730162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.733474] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.733532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.733561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.737045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.737104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.737132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.740464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.740522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.740550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.743532] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.743589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.743618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.747026] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.747074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.747102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.750601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.750654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.750682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.754031] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.754083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.754111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.757694] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.757750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.757779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.761157] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.761256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.761271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.764502] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.764560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.764588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.767601] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.767660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.767690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.771054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.771096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.771139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.774569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.774621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.774649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:40.435 [2024-07-14 10:15:10.778168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.778245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.778259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.781711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.781770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.781799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.785411] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.785469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.785498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.788696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.788756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.788785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.791930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.791988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.792017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.795937] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.795996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.796025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.799505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.799561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.799590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.803083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.803126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.803170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.806881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.806934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.806962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.810461] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.810514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.810542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.814294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.814348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.814376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.817565] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.817637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.435 [2024-07-14 10:15:10.817667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.435 [2024-07-14 10:15:10.821159] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.435 [2024-07-14 10:15:10.821257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.821272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.825288] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.825343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.825372] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.828274] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.828331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.828360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.832337] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.832392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.832420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.835548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.835606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.835634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.839328] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.839385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.839414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.842597] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.842650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.842678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.846040] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.846092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.846120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.849591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.849667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 
[2024-07-14 10:15:10.849696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.853264] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.853322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.853350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.856667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.856726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.856754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.860319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.860378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.860407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.863605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.863679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.863707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.867227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.867291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.867320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.871192] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.871260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.871288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.874540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.874591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14752 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.874619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.878170] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.878248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.878262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.881659] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.881714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.881743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.885084] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.885144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.885173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.888885] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.888945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.888975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.892682] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.892740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.892768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.896330] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.896387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.896416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.899814] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.899872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.899901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.903711] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.903769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.903797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.906991] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.907049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.907078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.910521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.910573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.910600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.914083] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.914135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.914163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.917552] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.917623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.917651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.921374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.921430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.921459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.436 [2024-07-14 10:15:10.924971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.436 [2024-07-14 10:15:10.925027] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.436 [2024-07-14 10:15:10.925055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.927563] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.927630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.930744] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.930796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.930849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.934545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.934597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.934626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.938342] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.938393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.938422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.941787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.941843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.941872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.945472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 [2024-07-14 10:15:10.945527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.437 [2024-07-14 10:15:10.945555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.437 [2024-07-14 10:15:10.949649] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.437 
[2024-07-14 10:15:10.949707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:40.437 [2024-07-14 10:15:10.949736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:40.437 [2024-07-14 10:15:10.953221] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:40.437 [2024-07-14 10:15:10.953278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:40.437 [2024-07-14 10:15:10.953306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... further identical data-digest-error triplets omitted: nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done *ERROR*: data digest error on tqpair=(0x19a76f0), each followed by the READ command print (qid:1, varying cid/lba, len:32) and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, 2024-07-14 10:15:10.956 through 10:15:11.473 ...]
00:22:40.959 [2024-07-14 10:15:11.473422] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:40.959 [2024-07-14 10:15:11.473462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:40.959 [2024-07-14 10:15:11.473492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:40.959 [2024-07-14 10:15:11.476835]
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.959 [2024-07-14 10:15:11.476892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.959 [2024-07-14 10:15:11.476921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.959 [2024-07-14 10:15:11.480574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.959 [2024-07-14 10:15:11.480630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.959 [2024-07-14 10:15:11.480659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.959 [2024-07-14 10:15:11.483679] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.959 [2024-07-14 10:15:11.483733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.959 [2024-07-14 10:15:11.483761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:40.959 [2024-07-14 10:15:11.487416] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.959 [2024-07-14 10:15:11.487472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.959 [2024-07-14 10:15:11.487500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:40.959 [2024-07-14 10:15:11.491104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.960 [2024-07-14 10:15:11.491176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.960 [2024-07-14 10:15:11.491204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:40.960 [2024-07-14 10:15:11.494667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.960 [2024-07-14 10:15:11.494719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.960 [2024-07-14 10:15:11.494748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:40.960 [2024-07-14 10:15:11.498432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.960 [2024-07-14 10:15:11.498469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.960 [2024-07-14 10:15:11.498498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:22:40.960 [2024-07-14 10:15:11.502566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:40.960 [2024-07-14 10:15:11.502606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:40.960 [2024-07-14 10:15:11.502636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.218 [2024-07-14 10:15:11.506470] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.218 [2024-07-14 10:15:11.506522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.218 [2024-07-14 10:15:11.506551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.218 [2024-07-14 10:15:11.510640] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.218 [2024-07-14 10:15:11.510692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.218 [2024-07-14 10:15:11.510721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.514487] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.514541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.514570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.517843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.517896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.517925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.522025] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.522078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.522107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.525520] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.525571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.525599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.529082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.529137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.529165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.532935] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.532990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.533019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.536284] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.536320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.536348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.540321] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.540360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.540390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.543538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.543592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.543620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.546699] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.546752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.546780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.550029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.550080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.550109] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.553352] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.553405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.553434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.557208] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.557281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.557310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.560858] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.560910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.560939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.564947] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.565001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.565029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.568581] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.568647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.568675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.571883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.571935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.571963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.575948] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.576001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 
[2024-07-14 10:15:11.576029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.579398] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.579451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.579482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.582968] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.583005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.583034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.586453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.586504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.586532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.589307] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.589345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.589373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.592368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.592422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.592451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.596075] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.219 [2024-07-14 10:15:11.596130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.219 [2024-07-14 10:15:11.596158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.219 [2024-07-14 10:15:11.600390] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.600444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.600472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.604348] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.604401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.604430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.607921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.607974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.608003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.611807] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.611861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.611889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.615881] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.615936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.615964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.619999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.620053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.620082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.624068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.624123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.624151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.627817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.627888] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.627916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.631817] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.631886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.631914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.635867] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.635921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.635950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.639793] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.639846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.639874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.643360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.643397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.643426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.646768] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.646857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.646887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.650196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.650257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.650285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.654130] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.654193] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.654232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.658068] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.658123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.658151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.661567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.661636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.661664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.664949] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.665001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.665029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.669101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.669154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.669183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.671907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.671961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.671989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.675439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.675491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.675519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.678883] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 
00:22:41.220 [2024-07-14 10:15:11.678917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.678945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.682141] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.682192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.682230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.220 [2024-07-14 10:15:11.685267] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.220 [2024-07-14 10:15:11.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.220 [2024-07-14 10:15:11.685348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.688906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.688959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.688987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.692037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.692090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.692118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.695550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.695619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.695647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.699184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.699243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.699272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.701950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.701999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.702027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.705728] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.705781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.705810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.709425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.709479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.709507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.712783] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.712836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.712864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.715909] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.715962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.715990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.719724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.719776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.719805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.723030] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.723069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.723098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.727040] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.727080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.727094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.730186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.730253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.730282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.733989] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.734044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.734073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.736966] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.737021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.737050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.740373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.740427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.740455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.744189] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.744260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.744290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.747763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.747818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.747846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:22:41.221 [2024-07-14 10:15:11.751438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.751491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.751520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.755050] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.755088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.755117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.758639] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.758690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.758719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.221 [2024-07-14 10:15:11.762538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.221 [2024-07-14 10:15:11.762575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.221 [2024-07-14 10:15:11.762603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.766125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.766178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.766207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.768927] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.768980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.769009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.772176] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.772265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.772294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.775944] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.775999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.776028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.779292] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.779329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.779357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.783037] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.783077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.783105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.786515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.786566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.786594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.790171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.790231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.790260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.793423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.793475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.793502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:41.481 [2024-07-14 10:15:11.797077] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0) 00:22:41.481 [2024-07-14 10:15:11.797130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:41.481 [2024-07-14 10:15:11.797158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:41.481 [2024-07-14 10:15:11.800837] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.481 [2024-07-14 10:15:11.800891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.481 [2024-07-14 10:15:11.800920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:41.481 [2024-07-14 10:15:11.804997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.481 [2024-07-14 10:15:11.805051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.481 [2024-07-14 10:15:11.805064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... many further READ completions between 10:15:11.808 and 10:15:12.178 trimmed here: all on qid:1 of tqpair=(0x19a76f0), all failing the TCP data digest check in nvme_tcp.c:1391 and completing with COMMAND TRANSIENT TRANSPORT ERROR (00/22); they are included in the 555-error total counted below ...]
00:22:41.745 [2024-07-14 10:15:12.180732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.745 [2024-07-14 10:15:12.180783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.745 [2024-07-14 10:15:12.180812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:41.745 [2024-07-14 10:15:12.184495] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.745 [2024-07-14 10:15:12.184531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.745 [2024-07-14 10:15:12.184560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:41.745 [2024-07-14 10:15:12.187733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.745 [2024-07-14 10:15:12.187785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.745 [2024-07-14 10:15:12.187814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:41.745 [2024-07-14 10:15:12.190959] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x19a76f0)
00:22:41.745 [2024-07-14 10:15:12.190997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:41.745 [2024-07-14 10:15:12.191025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:41.745
00:22:41.745 Latency(us)
00:22:41.745 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:41.745 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:22:41.745 nvme0n1 : 2.00 8612.36 1076.54 0.00 0.00 1854.58 666.53 7923.90
00:22:41.745 ===================================================================================================================
00:22:41.745 Total : 8612.36 1076.54 0.00 0.00 1854.58 666.53 7923.90
00:22:41.745 0
00:22:41.745 10:15:12 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
10:15:12 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
10:15:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
10:15:12 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:41.745 | .driver_specific
00:22:41.745 | .nvme_error
00:22:41.745 | .status_code
00:22:41.745 | .command_transient_transport_error'
00:22:42.002 10:15:12 -- host/digest.sh@71 -- # (( 555 > 0 ))
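
The pass/fail decision just traced comes straight out of bdevperf's error accounting: because the controller was attached with --nvme-error-stat, bdev_get_iostat reports completion counts broken down by NVMe status code, and the jq filter pulls out the command_transient_transport_error counter (555 for this run). As a side note, the MiB/s column of the table above is just IOPS times the 128 KiB IO size: 8612.36 x 0.125 MiB works out to the reported 1076.54 MiB/s. Below is a minimal sketch of that accounting step, assuming the same bperf.sock socket and bdev name; the helper body is inferred from the xtrace above, not copied from the digest.sh source.

get_transient_errcount() {
    local bdev=$1
    # --nvme-error-stat (set before the run) makes bdev_get_iostat report
    # per-status-code NVMe completion counters under driver_specific
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
(( errcount > 0 ))   # the digest test passes only if injected errors actually surfaced
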
00:22:42.002 10:15:12 -- host/digest.sh@73 -- # killprocess 97110
00:22:42.002 10:15:12 -- common/autotest_common.sh@926 -- # '[' -z 97110 ']'
00:22:42.002 10:15:12 -- common/autotest_common.sh@930 -- # kill -0 97110
00:22:42.002 10:15:12 -- common/autotest_common.sh@931 -- # uname
00:22:42.002 10:15:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:42.002 10:15:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97110
00:22:42.002 10:15:12 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:42.002 10:15:12 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:42.002 killing process with pid 97110
10:15:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97110'
10:15:12 -- common/autotest_common.sh@945 -- # kill 97110
00:22:42.002 Received shutdown signal, test time was about 2.000000 seconds
00:22:42.002
00:22:42.002 Latency(us)
00:22:42.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:42.002 ===================================================================================================================
00:22:42.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:42.002 10:15:12 -- common/autotest_common.sh@950 -- # wait 97110
00:22:42.259 10:15:12 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128
00:22:42.259 10:15:12 -- host/digest.sh@54 -- # local rw bs qd
00:22:42.259 10:15:12 -- host/digest.sh@56 -- # rw=randwrite
00:22:42.259 10:15:12 -- host/digest.sh@56 -- # bs=4096
00:22:42.259 10:15:12 -- host/digest.sh@56 -- # qd=128
00:22:42.259 10:15:12 -- host/digest.sh@58 -- # bperfpid=97199
00:22:42.259 10:15:12 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:22:42.259 10:15:12 -- host/digest.sh@60 -- # waitforlisten 97199 /var/tmp/bperf.sock
00:22:42.259 10:15:12 -- common/autotest_common.sh@819 -- # '[' -z 97199 ']'
00:22:42.259 10:15:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:42.259 10:15:12 -- common/autotest_common.sh@824 -- # local max_retries=100
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
10:15:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
10:15:12 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:42.259 10:15:12 -- common/autotest_common.sh@10 -- # set +x
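
For reference, the run_bperf_err invocation traced above maps its three positional arguments onto the bdevperf command line one-to-one. A sketch of that wrapper under stated assumptions: the variable names, the bdevperf flags, and the waitforlisten call are taken from the xtrace, while the backgrounding and pid capture are inferred, not read from the script source.

run_bperf_err() {
    local rw bs qd
    rw=$1 bs=$2 qd=$3   # randwrite 4096 128 in the run above
    # -z keeps bdevperf idle waiting for RPCs; -t 2 gives the two-second
    # runtime reported in the latency tables
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w "$rw" -o "$bs" -t 2 -q "$qd" -z &
    bperfpid=$!   # inferred: the trace records bperfpid=97199 at this point
    waitforlisten "$bperfpid" /var/tmp/bperf.sock
}
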
00:22:42.259 [2024-07-14 10:15:12.766352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97199 ] 00:22:42.517 [2024-07-14 10:15:12.903099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:42.517 [2024-07-14 10:15:12.963438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:43.449 10:15:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:43.449 10:15:13 -- common/autotest_common.sh@852 -- # return 0 00:22:43.449 10:15:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.449 10:15:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:22:43.449 10:15:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:22:43.449 10:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.449 10:15:13 -- common/autotest_common.sh@10 -- # set +x 00:22:43.449 10:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.449 10:15:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.449 10:15:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:22:43.706 nvme0n1 00:22:43.706 10:15:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:22:43.706 10:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:43.706 10:15:14 -- common/autotest_common.sh@10 -- # set +x 00:22:43.706 10:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:43.706 10:15:14 -- host/digest.sh@69 -- # bperf_py perform_tests 00:22:43.706 10:15:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:22:43.706 Running I/O for 2 seconds... 
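00:22:42.259 [2024-07-14 10:15:12.766248] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:22:42.259 [2024-07-14 10:15:12.766352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97199 ]
00:22:42.517 [2024-07-14 10:15:12.903099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:42.517 [2024-07-14 10:15:12.963438] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:43.449 10:15:13 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:43.449 10:15:13 -- common/autotest_common.sh@852 -- # return 0
00:22:43.449 10:15:13 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:43.449 10:15:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:43.449 10:15:13 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:43.449 10:15:13 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:43.449 10:15:13 -- common/autotest_common.sh@10 -- # set +x
00:22:43.449 10:15:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:43.449 10:15:13 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:43.449 10:15:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:43.706 nvme0n1
00:22:43.706 10:15:14 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:22:43.706 10:15:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:43.706 10:15:14 -- common/autotest_common.sh@10 -- # set +x
00:22:43.706 10:15:14 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:43.706 10:15:14 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:43.706 10:15:14 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

Condensed, the RPC sequence just traced is the whole error-injection dance for the write-path digest test: CRC32C corruption is switched off so the attach itself succeeds, the controller is attached with data digest (--ddgst) enabled, corruption is then re-armed for the next 256 CRC32C operations, and perform_tests starts the timed workload. A replay sketch of those calls follows; the bdev_nvme_* RPCs demonstrably go to bdevperf's socket, while the socket that rpc_cmd uses for the accel_error_inject_error calls is not shown in the trace, so the default target socket below is an assumption.

bperf='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock'
rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py'   # assumed: default /var/tmp/spdk.sock

# retry transient errors forever and keep per-status-code counters for iostat
$bperf bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
# stop corrupting crc32c so the connect/attach itself is clean
$rpc accel_error_inject_error -o crc32c -t disable
# attach over TCP with data digest (DDGST) enabled
$bperf bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0
# corrupt the next 256 crc32c computations so the data digests start failing
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256
# drive the randwrite workload configured when bdevperf was started
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
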
00:22:43.966 [2024-07-14 10:15:14.255305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6890 00:22:43.966 [2024-07-14 10:15:14.255667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.255693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.266260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fd640 00:22:43.966 [2024-07-14 10:15:14.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.267196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.274688] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f0bc0 00:22:43.966 [2024-07-14 10:15:14.275782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:2527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.275828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.283677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fcdd0 00:22:43.966 [2024-07-14 10:15:14.283798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.283818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.292917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f4b08 00:22:43.966 [2024-07-14 10:15:14.293027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6047 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.293046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.302186] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eff18 00:22:43.966 [2024-07-14 10:15:14.302298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:9282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.302317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.311475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fdeb0 00:22:43.966 [2024-07-14 10:15:14.311560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:20693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.311579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 
sqhd:0033 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.322129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ddc00 00:22:43.966 [2024-07-14 10:15:14.323294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.323339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.331366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6cc8 00:22:43.966 [2024-07-14 10:15:14.332495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:10146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.332540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.342441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f8618 00:22:43.966 [2024-07-14 10:15:14.343731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:6967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.343775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.349653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7970 00:22:43.966 [2024-07-14 10:15:14.349867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.349886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.361173] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e9168 00:22:43.966 [2024-07-14 10:15:14.362044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.362090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.369574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2510 00:22:43.966 [2024-07-14 10:15:14.370515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.370561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.379215] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ec840 00:22:43.966 [2024-07-14 10:15:14.380093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.380134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.389198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e95a0 00:22:43.966 [2024-07-14 10:15:14.390677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.390721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.398560] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f8618 00:22:43.966 [2024-07-14 10:15:14.400055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.400099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.408024] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e84c0 00:22:43.966 [2024-07-14 10:15:14.409545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13836 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.409589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.417419] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f3a28 00:22:43.966 [2024-07-14 10:15:14.418675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.418719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.426881] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190dece0 00:22:43.966 [2024-07-14 10:15:14.427766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11540 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.427813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.436175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6b70 00:22:43.966 [2024-07-14 10:15:14.437235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11703 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.437273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.446504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e5a90 00:22:43.966 [2024-07-14 10:15:14.447260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:10318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.447299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.454921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190dfdc0 00:22:43.966 [2024-07-14 10:15:14.456058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:805 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.456104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.466179] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e7c50 00:22:43.966 [2024-07-14 10:15:14.467008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.467040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.475930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e27f0 00:22:43.966 [2024-07-14 10:15:14.476678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.476709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.485476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f8618 00:22:43.966 [2024-07-14 10:15:14.486160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.486191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.494636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:43.966 [2024-07-14 10:15:14.495404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.495435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:43.966 [2024-07-14 10:15:14.505629] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fc998 00:22:43.966 [2024-07-14 10:15:14.506409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:43.966 [2024-07-14 10:15:14.506440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:44.225 [2024-07-14 10:15:14.515759] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f20d8 00:22:44.225 [2024-07-14 10:15:14.516054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:12254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.225 [2024-07-14 10:15:14.516077] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:44.225 [2024-07-14 10:15:14.527509] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e3060 00:22:44.225 [2024-07-14 10:15:14.528355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:2521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.225 [2024-07-14 10:15:14.528399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.537771] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e1f80 00:22:44.226 [2024-07-14 10:15:14.538196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:2666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.538259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.545984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fc998 00:22:44.226 [2024-07-14 10:15:14.546162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:11830 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.546181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.556427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f1ca0 00:22:44.226 [2024-07-14 10:15:14.556975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:8833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.557012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.565888] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e49b0 00:22:44.226 [2024-07-14 10:15:14.567028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:15771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.567063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.575521] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e9168 00:22:44.226 [2024-07-14 10:15:14.575841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:16525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.575865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.585147] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6738 00:22:44.226 [2024-07-14 10:15:14.585741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.585786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.594413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6300 00:22:44.226 [2024-07-14 10:15:14.594865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.594890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.603595] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190edd58 00:22:44.226 [2024-07-14 10:15:14.604013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:3860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.604053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.612785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190de470 00:22:44.226 [2024-07-14 10:15:14.613178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:19021 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.613258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.622035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190feb58 00:22:44.226 [2024-07-14 10:15:14.622441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:9638 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.622466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.631450] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f57b0 00:22:44.226 [2024-07-14 10:15:14.631791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.631816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.640652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190feb58 00:22:44.226 [2024-07-14 10:15:14.640970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.640995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.649886] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190de470 00:22:44.226 [2024-07-14 10:15:14.650240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 
10:15:14.650264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.660631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f0788 00:22:44.226 [2024-07-14 10:15:14.661615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:10360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.661666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.670861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e99d8 00:22:44.226 [2024-07-14 10:15:14.672029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12322 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.672071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.677891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f20d8 00:22:44.226 [2024-07-14 10:15:14.678209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:20719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.678240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.688046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ed0b0 00:22:44.226 [2024-07-14 10:15:14.688487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:13849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.688511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.697442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:44.226 [2024-07-14 10:15:14.698045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:13446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.698077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.706587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ed4e8 00:22:44.226 [2024-07-14 10:15:14.707171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.707212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.715712] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f20d8 00:22:44.226 [2024-07-14 10:15:14.716277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:44.226 [2024-07-14 10:15:14.716306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.724868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f31b8 00:22:44.226 [2024-07-14 10:15:14.725397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:15544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.725436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.734131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fa3a0 00:22:44.226 [2024-07-14 10:15:14.734635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.734664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.743339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6cc8 00:22:44.226 [2024-07-14 10:15:14.743790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.743838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.752868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fa3a0 00:22:44.226 [2024-07-14 10:15:14.753322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:18259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.753349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.761877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e7c50 00:22:44.226 [2024-07-14 10:15:14.762963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.763023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:44.226 [2024-07-14 10:15:14.771472] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e0a68 00:22:44.226 [2024-07-14 10:15:14.771832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:11908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.226 [2024-07-14 10:15:14.771855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:44.485 [2024-07-14 10:15:14.782934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fd208 00:22:44.486 [2024-07-14 10:15:14.783991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:25290 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.784037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.789979] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eee38 00:22:44.486 [2024-07-14 10:15:14.790119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.790138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.801335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6cc8 00:22:44.486 [2024-07-14 10:15:14.802115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.802175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.809613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e4140 00:22:44.486 [2024-07-14 10:15:14.810934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14877 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.810965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.819456] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6300 00:22:44.486 [2024-07-14 10:15:14.820275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.820328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.829324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eb760 00:22:44.486 [2024-07-14 10:15:14.830725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.830770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.838536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e99d8 00:22:44.486 [2024-07-14 10:15:14.839938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.839981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.847960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ff3c8 00:22:44.486 [2024-07-14 10:15:14.849451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:2634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.849496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.859100] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f8a50 00:22:44.486 [2024-07-14 10:15:14.860899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.860950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.870676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ef270 00:22:44.486 [2024-07-14 10:15:14.872320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.872389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.880847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7100 00:22:44.486 [2024-07-14 10:15:14.882177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.882257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.890129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e3498 00:22:44.486 [2024-07-14 10:15:14.891462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:7033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.891507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.899429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e3d08 00:22:44.486 [2024-07-14 10:15:14.900557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.900602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.910588] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ee5c8 00:22:44.486 [2024-07-14 10:15:14.911733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.911776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.917572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e8d30 00:22:44.486 [2024-07-14 10:15:14.917786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:12 nsid:1 lba:1774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.917805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.928313] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190df988 00:22:44.486 [2024-07-14 10:15:14.929025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22917 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.929055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.937242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ff3c8 00:22:44.486 [2024-07-14 10:15:14.938440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.938484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.946906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e1710 00:22:44.486 [2024-07-14 10:15:14.947365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:12496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.947393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.958162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f35f0 00:22:44.486 [2024-07-14 10:15:14.959147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.959192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.967563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e84c0 00:22:44.486 [2024-07-14 10:15:14.968999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.969044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.978030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e3060 00:22:44.486 [2024-07-14 10:15:14.978787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.978842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.988514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e5658 00:22:44.486 [2024-07-14 10:15:14.989820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:14.989867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:14.999868] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fe720 00:22:44.486 [2024-07-14 10:15:15.000320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:15.000353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:15.012648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ea248 00:22:44.486 [2024-07-14 10:15:15.013681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:15.013727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:15.020353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e9168 00:22:44.486 [2024-07-14 10:15:15.020467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:17828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:15.020486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:44.486 [2024-07-14 10:15:15.030788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fb048 00:22:44.486 [2024-07-14 10:15:15.031118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:13961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.486 [2024-07-14 10:15:15.031143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.041013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6458 00:22:44.746 [2024-07-14 10:15:15.041528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.041578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.051477] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f0bc0 00:22:44.746 [2024-07-14 10:15:15.052854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:11548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.052899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.061941] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e7c50 00:22:44.746 [2024-07-14 
10:15:15.062599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:5978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.062629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.074175] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6020 00:22:44.746 [2024-07-14 10:15:15.075453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.075497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.081257] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fac10 00:22:44.746 [2024-07-14 10:15:15.081430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:24872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.081450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.091013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eb328 00:22:44.746 [2024-07-14 10:15:15.091187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.091206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.100879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:44.746 [2024-07-14 10:15:15.101230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:13010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.101255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.111673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e5220 00:22:44.746 [2024-07-14 10:15:15.112972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.113017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.121783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ee190 00:22:44.746 [2024-07-14 10:15:15.122319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.122350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.132080] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ebfd0 
00:22:44.746 [2024-07-14 10:15:15.132736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.132767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.141891] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6890 00:22:44.746 [2024-07-14 10:15:15.143273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3557 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.143334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.152051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2948 00:22:44.746 [2024-07-14 10:15:15.152971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:23201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.153017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.161681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e1710 00:22:44.746 [2024-07-14 10:15:15.162984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.163014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.171818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f1430 00:22:44.746 [2024-07-14 10:15:15.173022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:18861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.173066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.181644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ed4e8 00:22:44.746 [2024-07-14 10:15:15.182854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.182883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.191335] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f96f8 00:22:44.746 [2024-07-14 10:15:15.192460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:6872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.192504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.200552] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with 
pdu=0x2000190ebfd0 00:22:44.746 [2024-07-14 10:15:15.201661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.201705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.210073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fa3a0 00:22:44.746 [2024-07-14 10:15:15.211020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.211053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.221154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7100 00:22:44.746 [2024-07-14 10:15:15.221873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.221904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.230361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e88f8 00:22:44.746 [2024-07-14 10:15:15.231053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.231083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.238762] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ea248 00:22:44.746 [2024-07-14 10:15:15.239774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:18728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.239819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.248183] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e23b8 00:22:44.746 [2024-07-14 10:15:15.249274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.249327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.258578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f35f0 00:22:44.746 [2024-07-14 10:15:15.259264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.259288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.267922] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2226620) with pdu=0x2000190df118 00:22:44.746 [2024-07-14 10:15:15.268677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:3396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.268710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.277120] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fbcf0 00:22:44.746 [2024-07-14 10:15:15.277826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.277859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:44.746 [2024-07-14 10:15:15.286133] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6020 00:22:44.746 [2024-07-14 10:15:15.287453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:44.746 [2024-07-14 10:15:15.287499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.295659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fda78 00:22:45.006 [2024-07-14 10:15:15.296308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.296340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.305816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e0630 00:22:45.006 [2024-07-14 10:15:15.306650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:16022 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.306696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.315198] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190df550 00:22:45.006 [2024-07-14 10:15:15.315978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.316009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.323475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e2c28 00:22:45.006 [2024-07-14 10:15:15.324570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.324615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.332708] tcp.c:2034:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2226620) with pdu=0x2000190e2c28 00:22:45.006 [2024-07-14 10:15:15.333857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.333901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.342038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e2c28 00:22:45.006 [2024-07-14 10:15:15.343345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:18630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.343374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.350564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e4578 00:22:45.006 [2024-07-14 10:15:15.351610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:20081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.351654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.360182] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e3d08 00:22:45.006 [2024-07-14 10:15:15.361451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.361495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.369818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e7c50 00:22:45.006 [2024-07-14 10:15:15.370992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.371022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.380301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190dfdc0 00:22:45.006 [2024-07-14 10:15:15.380869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.380899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.392014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6b70 00:22:45.006 [2024-07-14 10:15:15.393231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.393299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.399071] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:45.006 [2024-07-14 10:15:15.399419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.399444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.410368] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190de8a8 00:22:45.006 [2024-07-14 10:15:15.411204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:21666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.411253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.418568] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f0bc0 00:22:45.006 [2024-07-14 10:15:15.419570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:24775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.419614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.427927] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2948 00:22:45.006 [2024-07-14 10:15:15.429197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.429251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.437350] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e8d30 00:22:45.006 [2024-07-14 10:15:15.438669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:1610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.438712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.446519] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e4578 00:22:45.006 [2024-07-14 10:15:15.447696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:14244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.447741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.456380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6458 00:22:45.006 [2024-07-14 10:15:15.457338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.457384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:45.006 
[2024-07-14 10:15:15.466193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2510 00:22:45.006 [2024-07-14 10:15:15.467154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:17803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.467234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.476263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fe720 00:22:45.006 [2024-07-14 10:15:15.477462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:10079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.477507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.487305] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190edd58 00:22:45.006 [2024-07-14 10:15:15.487858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:8982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.487888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.499652] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e73e0 00:22:45.006 [2024-07-14 10:15:15.500843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.500886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.506669] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e9e10 00:22:45.006 [2024-07-14 10:15:15.506784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.506813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.516317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ef6a8 00:22:45.006 [2024-07-14 10:15:15.516593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.006 [2024-07-14 10:15:15.516633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:22:45.006 [2024-07-14 10:15:15.527036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f4f40 00:22:45.007 [2024-07-14 10:15:15.527358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:13363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.007 [2024-07-14 10:15:15.527387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 
m:0 dnr:0 00:22:45.007 [2024-07-14 10:15:15.540032] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eaab8 00:22:45.007 [2024-07-14 10:15:15.541696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:4271 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.007 [2024-07-14 10:15:15.541739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:45.007 [2024-07-14 10:15:15.550485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f1ca0 00:22:45.266 [2024-07-14 10:15:15.552061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.266 [2024-07-14 10:15:15.552105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.266 [2024-07-14 10:15:15.560989] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ef270 00:22:45.266 [2024-07-14 10:15:15.562627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.266 [2024-07-14 10:15:15.562672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:45.266 [2024-07-14 10:15:15.570352] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f57b0 00:22:45.266 [2024-07-14 10:15:15.571781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.266 [2024-07-14 10:15:15.571826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:45.266 [2024-07-14 10:15:15.579698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e12d8 00:22:45.266 [2024-07-14 10:15:15.581074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.266 [2024-07-14 10:15:15.581118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:45.266 [2024-07-14 10:15:15.589276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e0ea0 00:22:45.266 [2024-07-14 10:15:15.590875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.590904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.597930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e4578 00:22:45.267 [2024-07-14 10:15:15.599214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:20379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.599288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:112 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.608583] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e95a0 00:22:45.267 [2024-07-14 10:15:15.609546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.609590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.615716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190de470 00:22:45.267 [2024-07-14 10:15:15.615791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.615810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.625763] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ebb98 00:22:45.267 [2024-07-14 10:15:15.625979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.625997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.635014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:45.267 [2024-07-14 10:15:15.635384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.635404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.644164] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e9e10 00:22:45.267 [2024-07-14 10:15:15.644503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:20949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.644528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.653514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fac10 00:22:45.267 [2024-07-14 10:15:15.653816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:16130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.653840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.662651] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ee190 00:22:45.267 [2024-07-14 10:15:15.662937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:23134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.662961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.671814] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ef6a8 00:22:45.267 [2024-07-14 10:15:15.672081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.672132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.681238] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fdeb0 00:22:45.267 [2024-07-14 10:15:15.681462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.681481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.690336] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ef6a8 00:22:45.267 [2024-07-14 10:15:15.690535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.690554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.699446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ee190 00:22:45.267 [2024-07-14 10:15:15.699661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.699685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.711098] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f92c0 00:22:45.267 [2024-07-14 10:15:15.712232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:2247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.712264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.718165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eaab8 00:22:45.267 [2024-07-14 10:15:15.718338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.718358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.729502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f5378 00:22:45.267 [2024-07-14 10:15:15.730334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:25402 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.730379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.737681] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f96f8 00:22:45.267 [2024-07-14 10:15:15.738965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:23801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.738995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.747433] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f8618 00:22:45.267 [2024-07-14 10:15:15.748601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:25031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.748645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.756301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ddc00 00:22:45.267 [2024-07-14 10:15:15.757454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.757498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.765471] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:45.267 [2024-07-14 10:15:15.766574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.766617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.774637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f31b8 00:22:45.267 [2024-07-14 10:15:15.775733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.775776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.783761] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eaef0 00:22:45.267 [2024-07-14 10:15:15.784834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.784861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.792972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fc128 00:22:45.267 [2024-07-14 10:15:15.794063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:20054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.794107] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:45.267 [2024-07-14 10:15:15.802169] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f4b08 00:22:45.267 [2024-07-14 10:15:15.803287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.267 [2024-07-14 10:15:15.803330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.813122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2d80 00:22:45.526 [2024-07-14 10:15:15.813939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:18666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 10:15:15.813969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.821623] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f4f40 00:22:45.526 [2024-07-14 10:15:15.822553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 10:15:15.822598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.831001] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2948 00:22:45.526 [2024-07-14 10:15:15.831964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 10:15:15.832010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.841749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2948 00:22:45.526 [2024-07-14 10:15:15.842614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 10:15:15.842659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.850140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fb8b8 00:22:45.526 [2024-07-14 10:15:15.851189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 10:15:15.851265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.860748] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7970 00:22:45.526 [2024-07-14 10:15:15.861652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:15742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.526 [2024-07-14 
10:15:15.861696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:45.526 [2024-07-14 10:15:15.869563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e4578 00:22:45.526 [2024-07-14 10:15:15.870930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.870958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.879339] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e1f80 00:22:45.527 [2024-07-14 10:15:15.879961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.879991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.890155] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f4b08 00:22:45.527 [2024-07-14 10:15:15.891228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.891261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.897342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190dece0 00:22:45.527 [2024-07-14 10:15:15.897527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3632 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.897546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.908728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190de470 00:22:45.527 [2024-07-14 10:15:15.909566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.909641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.917601] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ec408 00:22:45.527 [2024-07-14 10:15:15.918914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.918943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.927271] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7970 00:22:45.527 [2024-07-14 10:15:15.927826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:45.527 [2024-07-14 10:15:15.927870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.937017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e5ec8 00:22:45.527 [2024-07-14 10:15:15.937767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.937799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.946717] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:45.527 [2024-07-14 10:15:15.947724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.947768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.956497] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fd208 00:22:45.527 [2024-07-14 10:15:15.957862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.957908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.966051] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ddc00 00:22:45.527 [2024-07-14 10:15:15.966858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:1796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.966889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.975520] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f9b30 00:22:45.527 [2024-07-14 10:15:15.975969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.975996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.986985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f46d0 00:22:45.527 [2024-07-14 10:15:15.988079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.988122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:15.995349] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190eaef0 00:22:45.527 [2024-07-14 10:15:15.996819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4893 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:15.996865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.004994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e6fa8 00:22:45.527 [2024-07-14 10:15:16.006148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.006193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.014899] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ed4e8 00:22:45.527 [2024-07-14 10:15:16.016183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.016235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.024267] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f35f0 00:22:45.527 [2024-07-14 10:15:16.025746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.025793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.032722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fd208 00:22:45.527 [2024-07-14 10:15:16.033845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:21357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.033891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.042121] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f5378 00:22:45.527 [2024-07-14 10:15:16.042899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.042931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.051500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7da8 00:22:45.527 [2024-07-14 10:15:16.052182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.052225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.060670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fb8b8 00:22:45.527 [2024-07-14 10:15:16.061406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8084 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.061437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:22:45.527 [2024-07-14 10:15:16.069857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f7538 00:22:45.527 [2024-07-14 10:15:16.070600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:11701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.527 [2024-07-14 10:15:16.070631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.079012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e1710 00:22:45.786 [2024-07-14 10:15:16.079765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:11475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.079796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.088295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190df118 00:22:45.786 [2024-07-14 10:15:16.089085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:6577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.089145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.097316] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e5658 00:22:45.786 [2024-07-14 10:15:16.098264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.098318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.106587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f31b8 00:22:45.786 [2024-07-14 10:15:16.108069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22762 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.108114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.115977] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e38d0 00:22:45.786 [2024-07-14 10:15:16.117576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.117621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.126145] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e0ea0 00:22:45.786 [2024-07-14 10:15:16.127259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:52 nsid:1 lba:21214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.127328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.134685] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fb048 00:22:45.786 [2024-07-14 10:15:16.135259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.135292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.144689] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190dece0 00:22:45.786 [2024-07-14 10:15:16.145495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:19568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.786 [2024-07-14 10:15:16.145539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:22:45.786 [2024-07-14 10:15:16.154054] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fc560 00:22:45.786 [2024-07-14 10:15:16.154618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.787 [2024-07-14 10:15:16.154649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 10:15:16.163383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190fd640 00:22:45.787 [2024-07-14 10:15:16.163886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.787 [2024-07-14 10:15:16.163909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 10:15:16.172965] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6cc8 00:22:45.787 [2024-07-14 10:15:16.173459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:15518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.787 [2024-07-14 10:15:16.173493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 10:15:16.183909] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190e8d30 00:22:45.787 [2024-07-14 10:15:16.184361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:11806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:45.787 [2024-07-14 10:15:16.184403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:22:45.787 [2024-07-14 10:15:16.194830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f1430 00:22:45.787 [2024-07-14 10:15:16.195205] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:12966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.195284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:22:45.787 [2024-07-14 10:15:16.204787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ed920
00:22:45.787 [2024-07-14 10:15:16.205147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.205189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:22:45.787 [2024-07-14 10:15:16.214703] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f6020
00:22:45.787 [2024-07-14 10:15:16.215092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.215118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:22:45.787 [2024-07-14 10:15:16.225012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f2948
00:22:45.787 [2024-07-14 10:15:16.226207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.226263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:22:45.787 [2024-07-14 10:15:16.234480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190f3a28
00:22:45.787 [2024-07-14 10:15:16.234645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.234665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:22:45.787 [2024-07-14 10:15:16.244187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226620) with pdu=0x2000190ea248
00:22:45.787 [2024-07-14 10:15:16.244598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16486 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:45.787 [2024-07-14 10:15:16.244625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:22:45.787
00:22:45.787 Latency(us)
00:22:45.787 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:45.787 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:22:45.787 nvme0n1 : 2.00 26382.13 103.06 0.00 0.00 4846.82 1839.48 13226.36
00:22:45.787 ===================================================================================================================
00:22:45.787 Total : 26382.13 103.06 0.00 0.00 4846.82 1839.48 13226.36
00:22:45.787 0
00:22:45.787 10:15:16 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:22:45.787 10:15:16 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:22:45.787 10:15:16 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:22:45.787 10:15:16 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:22:45.787 | .driver_specific
00:22:45.787 | .nvme_error
00:22:45.787 | .status_code
00:22:45.787 | .command_transient_transport_error'
00:22:46.044 10:15:16 -- host/digest.sh@71 -- # (( 207 > 0 ))
00:22:46.044 10:15:16 -- host/digest.sh@73 -- # killprocess 97199
00:22:46.044 10:15:16 -- common/autotest_common.sh@926 -- # '[' -z 97199 ']'
00:22:46.044 10:15:16 -- common/autotest_common.sh@930 -- # kill -0 97199
00:22:46.044 10:15:16 -- common/autotest_common.sh@931 -- # uname
00:22:46.044 10:15:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:46.044 10:15:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97199
00:22:46.044 10:15:16 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:46.044 10:15:16 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:46.044 killing process with pid 97199
10:15:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97199'
Received shutdown signal, test time was about 2.000000 seconds
00:22:46.044
00:22:46.044 Latency(us)
00:22:46.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:46.044 ===================================================================================================================
00:22:46.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:46.044 10:15:16 -- common/autotest_common.sh@945 -- # kill 97199
00:22:46.044 10:15:16 -- common/autotest_common.sh@950 -- # wait 97199
00:22:46.301 10:15:16 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16
00:22:46.301 10:15:16 -- host/digest.sh@54 -- # local rw bs qd
00:22:46.301 10:15:16 -- host/digest.sh@56 -- # rw=randwrite
00:22:46.301 10:15:16 -- host/digest.sh@56 -- # bs=131072
00:22:46.301 10:15:16 -- host/digest.sh@56 -- # qd=16
00:22:46.301 10:15:16 -- host/digest.sh@58 -- # bperfpid=97285
00:22:46.301 10:15:16 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:22:46.301 10:15:16 -- host/digest.sh@60 -- # waitforlisten 97285 /var/tmp/bperf.sock
00:22:46.301 10:15:16 -- common/autotest_common.sh@819 -- # '[' -z 97285 ']'
00:22:46.301 10:15:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock
00:22:46.301 10:15:16 -- common/autotest_common.sh@824 -- # local max_retries=100
00:22:46.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:22:46.301 10:15:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:22:46.301 10:15:16 -- common/autotest_common.sh@828 -- # xtrace_disable
00:22:46.301 10:15:16 -- common/autotest_common.sh@10 -- # set +x
00:22:46.301 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:46.301 Zero copy mechanism will not be used.
00:22:46.301 [2024-07-14 10:15:16.793194] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
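
The run above passes on two points that are easy to miss in the noise. First, the summary table is self-consistent: 26382.13 IOPS of 4096-byte writes is about 103.06 MiB/s, matching the MiB/s column, and at queue depth 128 the expected average latency is 128 / 26382.13 s, roughly 4852 us, close to the reported 4846.82 us; Fail/s stays 0.00 because the bdev layer is configured to retry the errored commands (--bdev-retry-count -1). Second, the actual assertion is the (( 207 > 0 )) line: get_transient_errcount must see a nonzero command_transient_transport_error counter, proving the injected digest corruption really surfaced as transient transport errors. A minimal standalone sketch of that extraction follows; the helper is ours, not part of host/digest.sh, while the rpc.py path, socket, and JSON field path are taken from the trace above.

    #!/usr/bin/env python3
    # Sketch of the get_transient_errcount step traced above: query the
    # bdevperf app over the bperf RPC socket and pull out the one nested
    # counter that the jq pipeline extracts.
    import json
    import subprocess

    RPC_PY = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    RPC_SOCK = "/var/tmp/bperf.sock"

    def get_transient_errcount(bdev: str) -> int:
        out = subprocess.check_output(
            [RPC_PY, "-s", RPC_SOCK, "bdev_get_iostat", "-b", bdev])
        stats = json.loads(out)
        # Same path the jq filter walks; the nvme_error block is only
        # populated because bdev_nvme_set_options was given --nvme-error-stat.
        counters = stats["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"]
        return counters["command_transient_transport_error"]

    if __name__ == "__main__":
        count = get_transient_errcount("nvme0n1")
        # The harness asserts the shell equivalent, (( count > 0 )); here it saw 207.
        print(count)
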
00:22:46.301 [2024-07-14 10:15:16.793297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97285 ]
00:22:46.557 [2024-07-14 10:15:16.928873] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:46.557 [2024-07-14 10:15:16.989170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:22:47.489 10:15:17 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:22:47.489 10:15:17 -- common/autotest_common.sh@852 -- # return 0
00:22:47.489 10:15:17 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:47.489 10:15:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:22:47.489 10:15:17 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:22:47.489 10:15:17 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:47.489 10:15:17 -- common/autotest_common.sh@10 -- # set +x
00:22:47.489 10:15:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:47.489 10:15:18 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:47.489 10:15:18 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:22:47.748 nvme0n1
00:22:47.748 10:15:18 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:22:47.748 10:15:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:22:47.748 10:15:18 -- common/autotest_common.sh@10 -- # set +x
00:22:47.748 10:15:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:22:47.748 10:15:18 -- host/digest.sh@69 -- # bperf_py perform_tests
00:22:47.748 10:15:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:22:48.010 I/O size of 131072 is greater than zero copy threshold (65536).
00:22:48.010 Zero copy mechanism will not be used.
00:22:48.010 Running I/O for 2 seconds...
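
The second run repeats the experiment at a different I/O shape: 131072-byte random writes at queue depth 16, against a fresh controller attached with --ddgst, which enables the NVMe/TCP data digest, a CRC32C over each PDU's data. Note that accel_error_inject_error is issued through rpc_cmd rather than bperf_rpc, so it lands on the target's default RPC socket rather than the bdevperf host on /var/tmp/bperf.sock: it is the target side's CRC32C work that is corrupted, the recomputed digest stops matching the one carried in the PDU, data_crc32_calc_done reports a data digest error, and each affected WRITE is completed back to the host with COMMAND TRANSIENT TRANSPORT ERROR (00/22), which the host retries. A toy model of that mismatch check follows (pure Python, ours; SPDK's real CRC32C lives in the accel layer, and the harness corrupts the checksum computation rather than the data, with the same observable effect).

    #!/usr/bin/env python3
    # Toy model of the DDGST check behind the "data_crc32_calc_done: Data
    # digest error" records: a bit-at-a-time reflected CRC32C (Castagnoli).

    CRC32C_POLY = 0x82F63B78  # reflected Castagnoli polynomial

    def crc32c(data: bytes) -> int:
        crc = 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (CRC32C_POLY if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    # Sanity check against the standard CRC32C test vector.
    assert crc32c(b"123456789") == 0xE3069283

    payload = bytearray(4096)            # one 4 KiB WRITE, as in the first run
    ddgst_sent = crc32c(bytes(payload))  # digest the sender attaches to the PDU

    # Force a mismatch by flipping one payload bit; the harness instead corrupts
    # the receiver's crc32c computation, but the observable result is the same:
    # recomputed digest != received digest, and the command completes with 00/22.
    payload[100] ^= 0x01
    ddgst_recv = crc32c(bytes(payload))

    assert ddgst_sent != ddgst_recv
    print(f"DDGST sent 0x{ddgst_sent:08x}, recomputed 0x{ddgst_recv:08x} -> digest error")
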
00:22:48.010 [2024-07-14 10:15:18.365469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90
00:22:48.010 [2024-07-14 10:15:18.365727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:48.010 [2024-07-14 10:15:18.365755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
[... ~150 further ERROR/NOTICE/NOTICE triplets of the same shape omitted, spanning 10:15:18.369349 through 10:15:18.937625 (log clock 00:22:48.010-00:22:48.537): every WRITE on qid:1 cid:15 hits "Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90" in tcp.c:2034:data_crc32_calc_done and completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22); only the lba value and the sqhd field (cycling 0001/0021/0041/0061) change between entries; the excerpt ends mid-entry at [2024-07-14 10:15:18.937625] ...]
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.537 [2024-07-14 10:15:18.937795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.537 [2024-07-14 10:15:18.937816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.537 [2024-07-14 10:15:18.941402] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.537 [2024-07-14 10:15:18.941512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.537 [2024-07-14 10:15:18.941548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.537 [2024-07-14 10:15:18.945950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.537 [2024-07-14 10:15:18.946056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.537 [2024-07-14 10:15:18.946078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.537 [2024-07-14 10:15:18.950817] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.537 [2024-07-14 10:15:18.950953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.950978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.955705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.955906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.955931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.960550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.960729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.960760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.965505] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.965748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.965791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:48.538 [2024-07-14 10:15:18.970079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.970379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.970427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.974865] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.975046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.975077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.979665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.979761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.979785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.984240] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.984419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.984442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.988196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.988304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.988324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.992012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.992153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.992174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.995830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.995983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:18.996003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:18.999756] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:18.999950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.000003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.003551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.003749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.003768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.007359] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.007534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.007555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.011154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.011312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.011332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.014994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.015091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.015127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.018743] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.018859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.018878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.022460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.022601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.022621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.026106] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.026242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.026262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.029925] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.030117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.030137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.033715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.033895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.033915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.037491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.037688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.037708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.041342] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.041441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.041462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.045114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.045202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.045222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.048887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.048978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.048998] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.052725] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.052866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.052886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.056484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.056632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.056653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.060180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.060396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.060416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.063972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.064182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.538 [2024-07-14 10:15:19.064202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.538 [2024-07-14 10:15:19.067815] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.538 [2024-07-14 10:15:19.067980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.539 [2024-07-14 10:15:19.068000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.539 [2024-07-14 10:15:19.071840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.539 [2024-07-14 10:15:19.071932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.539 [2024-07-14 10:15:19.071953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.539 [2024-07-14 10:15:19.075753] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.539 [2024-07-14 10:15:19.075855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.539 
[2024-07-14 10:15:19.075875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.539 [2024-07-14 10:15:19.079502] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.539 [2024-07-14 10:15:19.079593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.539 [2024-07-14 10:15:19.079614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.799 [2024-07-14 10:15:19.083263] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.799 [2024-07-14 10:15:19.083409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.799 [2024-07-14 10:15:19.083430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.799 [2024-07-14 10:15:19.087026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.799 [2024-07-14 10:15:19.087176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.799 [2024-07-14 10:15:19.087195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.799 [2024-07-14 10:15:19.090956] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.799 [2024-07-14 10:15:19.091163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.799 [2024-07-14 10:15:19.091189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.799 [2024-07-14 10:15:19.094740] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.799 [2024-07-14 10:15:19.095018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.095045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.098438] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.098525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.098545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.102118] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.102229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.102250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.105858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.105954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.105974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.109641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.109730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.109750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.113517] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.113676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.113697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.117296] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.117457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.117477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.121136] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.121345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.121366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.124919] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.125119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.125138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.128738] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.128908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.128929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.132501] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.132578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.132598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.136260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.136357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.136377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.139917] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.140008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.140028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.143856] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.144013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.144035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.147714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.147880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.147900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.151593] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.151789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.151810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.155380] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.155573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.155594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.159093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.159264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.159285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.162831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.162931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.162953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.166709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.166845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.166866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.170485] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.170579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.170599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.174323] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.174455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.174476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.178048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.178172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.178192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.181859] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 
[2024-07-14 10:15:19.182048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.182069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.185631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.185847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.185867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.189462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.189651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.189672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.193324] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.193420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.193441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.197152] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.197294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.197315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.200889] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.200979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.800 [2024-07-14 10:15:19.200999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.800 [2024-07-14 10:15:19.204774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.800 [2024-07-14 10:15:19.204914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.204935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.208584] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.208754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.208774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.212455] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.212635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.212655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.216231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.216438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.216458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.220064] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.220288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.220310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.223940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.224034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.224055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.227838] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.227937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.227957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.231751] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.231846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.231866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.235691] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.235838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.235858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.239494] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.239675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.239695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.243425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.243620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.243641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.247301] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.247479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.247531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.251036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.251186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.251207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.254861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.254938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.254959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.258499] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.258584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.258604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:22:48.801 [2024-07-14 10:15:19.262104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.262229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.262248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.265934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.266076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.266097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.269715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.269857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.269876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.273562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.273755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.273775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.277460] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.277740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.277792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.281315] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.281424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.281444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.285154] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.285254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.285286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.289053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.289143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.289163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.292795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.292891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.292910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.296553] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.296685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.296706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.300249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.300349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.300369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.304088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.304312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.304333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.307858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.311733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.311831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.801 [2024-07-14 10:15:19.311851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.801 [2024-07-14 10:15:19.315646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.801 [2024-07-14 10:15:19.315760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.315780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.319478] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.319570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.319591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.323236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.323343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.323364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.327035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.327161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.327181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.330722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.330930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.330951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.334574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.334794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.334824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.338275] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.338501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.338526] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:48.802 [2024-07-14 10:15:19.342014] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:48.802 [2024-07-14 10:15:19.342113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:48.802 [2024-07-14 10:15:19.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.345962] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.346062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.346083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.349789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.349888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.349909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.353636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.353737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.353758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.357550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.357690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.357711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.361295] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.361438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.361458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.365172] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.365412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 
[2024-07-14 10:15:19.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.368935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.369137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.369157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.372816] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.372985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.373006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.376804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.376916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.376937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.380754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.380854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.380874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.384706] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.384817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.384837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.388727] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.388891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.388912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.392675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.392822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.392842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.063 [2024-07-14 10:15:19.396658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.063 [2024-07-14 10:15:19.396864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.063 [2024-07-14 10:15:19.396891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.400524] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.400736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.400762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.404343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.404439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.404460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.408231] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.408342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.408378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.412077] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.412174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.412194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.415972] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.416082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.416103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.420021] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.420168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.420189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.423971] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.424125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.424146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.427938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.428130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.428150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.431745] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.431946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.431966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.435571] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.435737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.435757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.439390] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.439489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.439508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.443211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.443323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.443343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.447012] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.447091] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.447111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.450851] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.450987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.451008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.454579] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.454790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.458656] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.458863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.458884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.462347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.462525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.462578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.466156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.466347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.466368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.469818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.469925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.469945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.473557] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 
[2024-07-14 10:15:19.473662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.473682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.477360] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.477464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.477483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.481159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.481312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.481333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.484945] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.485100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.485119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.488831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.489050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.489071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.492662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.492880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.492900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.496393] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.496539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.496560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.500165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.500301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.500321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.503935] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.504029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.064 [2024-07-14 10:15:19.504049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.064 [2024-07-14 10:15:19.507788] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.064 [2024-07-14 10:15:19.507902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.507922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.511667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.511809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.511829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.515453] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.515565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.515585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.519228] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.519416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.519436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.522853] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.523085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.523137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.526655] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.526836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.526856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.530484] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.530587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.530607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.534259] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.534351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.534371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.538040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.538132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.538152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.541831] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.541988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.542008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.545742] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.545889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.545909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.549637] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.549832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.549853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
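
The records above all follow one pattern: the target's data_crc32_calc_done (tcp.c:2034) recomputes the CRC32C data digest over each received WRITE PDU, finds that it does not match the digest the initiator sent, and the command is completed with COMMAND TRANSIENT TRANSPORT ERROR (00/22). With dnr:0 the do-not-retry bit is clear, so the host is permitted to resubmit; the cycling sqhd values (0001/0021/0041/0061) are just the reported submission queue head positions, not error state. As a reference for the arithmetic being checked, here is a minimal self-contained sketch, not SPDK's implementation (the transport uses SPDK's own CRC32C utilities); the crc32c() helper, the pdu_data buffer, and the single-bit corruption are illustrative stand-ins for the injected digest fault this test exercises:

/* build: cc -o digest digest.c && ./digest */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected poly 0x82F63B78), seeded with
 * 0xFFFFFFFF and XORed with 0xFFFFFFFF at the end - the standard CRC32C
 * construction that the NVMe/TCP header and data digests use. */
static uint32_t crc32c(const uint8_t *p, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    while (len--) {
        crc ^= *p++;
        for (int k = 0; k < 8; k++)
            crc = (crc & 1) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
    }
    return crc ^ 0xFFFFFFFFu;
}

int main(void)
{
    /* Stand-in for the data section of one of the "len:32" WRITE PDUs
     * in the log; the contents are arbitrary. */
    uint8_t pdu_data[512];
    memset(pdu_data, 0xA5, sizeof(pdu_data));

    uint32_t sent = crc32c(pdu_data, sizeof(pdu_data));
    pdu_data[0] ^= 0x01;                 /* corrupt one bit "in flight" */
    uint32_t recv = crc32c(pdu_data, sizeof(pdu_data));

    /* A mismatch here is the condition data_crc32_calc_done reports
     * as "Data digest error" before failing the command. */
    printf("digest sent=0x%08x recv=0x%08x %s\n", sent, recv,
           sent == recv ? "ok" : "MISMATCH");
    return 0;
}

Any single corrupted bit flips the digest, so every injected fault surfaces as one ERROR/NOTICE/NOTICE triple like those above, and the stream continues with the next WRITE.
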
00:22:49.065 [2024-07-14 10:15:19.553431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.553673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.553700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.557239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.557348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.557369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.561114] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.561209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.561241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.564884] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.564972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.564992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.568736] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.568827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.568847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.572538] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.572665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.572685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.576247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.576390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.576410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.579994] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.580195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.580226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.583808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.584038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.584074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.587643] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.587808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.587828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.591858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.591967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.591989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.596040] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.596132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.596153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.600490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.600587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.600639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.065 [2024-07-14 10:15:19.605034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.065 [2024-07-14 10:15:19.605221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.065 [2024-07-14 10:15:19.605245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.609421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.609584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.609651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.613847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.614038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.614058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.618034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.618285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.618307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.622110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.622326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.622348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.626085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.626198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.626217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.629844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.629940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.629960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.633547] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.633635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.633656] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.637297] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.637441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.637463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.641034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.641158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.641178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.644930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.645131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.645151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.648806] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.649049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.649117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.652539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.652696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.652716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.656159] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.656279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.656299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.659840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.659936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 
[2024-07-14 10:15:19.659956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.663678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.663769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.663789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.667555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.327 [2024-07-14 10:15:19.667717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.327 [2024-07-14 10:15:19.667738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.327 [2024-07-14 10:15:19.671413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.671553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.671573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.675354] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.675552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.675572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.679163] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.679403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.679455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.682908] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.683067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.683089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.686658] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.686748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.686769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.690413] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.690494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.690514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.694035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.694125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.694145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.697878] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.698020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.698040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.701606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.701741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.701761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.705555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.705745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.705766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.709341] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.709539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.709558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.713196] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.713374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.713395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.716930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.717020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.717040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.720848] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.720945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.720965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.724644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.724734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.724755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.728564] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.728693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.728714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.732167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.732360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.732380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.736029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.736235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.736256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.739795] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.740016] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.740036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.743545] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.743635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.743655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.747383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.747499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.747519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.751093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.751178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.751197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.754754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.754858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.754878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.758619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.758762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.758782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.762329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.762438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.766058] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 
10:15:19.766250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.766284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.769875] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.770056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.770076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.773491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.773646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.773672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.777101] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.777261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.328 [2024-07-14 10:15:19.777282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.328 [2024-07-14 10:15:19.781165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.328 [2024-07-14 10:15:19.781322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.329 [2024-07-14 10:15:19.781343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.329 [2024-07-14 10:15:19.785165] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.329 [2024-07-14 10:15:19.785285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.329 [2024-07-14 10:15:19.785307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.329 [2024-07-14 10:15:19.789664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.329 [2024-07-14 10:15:19.789819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.329 [2024-07-14 10:15:19.789839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.329 [2024-07-14 10:15:19.794085] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 
[... log condensed: the same three-record cycle (tcp.c:2034:data_crc32_calc_done data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90; nvme_io_qpair_print_command WRITE sqid:1 cid:0 nsid:1 len:32 with varying lba; spdk_nvme_print_completion COMMAND TRANSIENT TRANSPORT ERROR (00/22) with sqhd stepping 0001/0021/0041/0061 and p:0 m:0 dnr:0) repeats for every remaining injected WRITE from 10:15:19.794 through 10:15:20.312 ...]
00:22:49.855 [2024-07-14 10:15:20.312767] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90
00:22:49.855 [2024-07-14 10:15:20.312859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:49.855 [2024-07-14 10:15:20.312879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:49.855 [2024-07-14 10:15:20.316674]
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.316793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.316812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.320444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.320540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.320560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.324115] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.324206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.324238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.327877] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.328019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.328039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.331617] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.331803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.331822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.335518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.335715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.335735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.339378] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.339603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.339639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 
[2024-07-14 10:15:20.343068] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.343216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.343237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.346785] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.346889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.346910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.350514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.350611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.350631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.354533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.354624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.354645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:49.855 [2024-07-14 10:15:20.358425] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2226a90) with pdu=0x2000190fef90 00:22:49.855 [2024-07-14 10:15:20.358576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:49.855 [2024-07-14 10:15:20.358597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:22:49.855 00:22:49.855 Latency(us) 00:22:49.855 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:49.855 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:22:49.856 nvme0n1 : 2.00 7911.62 988.95 0.00 0.00 2017.68 1452.22 5332.25 00:22:49.856 =================================================================================================================== 00:22:49.856 Total : 7911.62 988.95 0.00 0.00 2017.68 1452.22 5332.25 00:22:49.856 0 00:22:49.856 10:15:20 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:22:49.856 10:15:20 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:22:49.856 10:15:20 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:22:49.856 | .driver_specific 00:22:49.856 | .nvme_error 00:22:49.856 | .status_code 00:22:49.856 | .command_transient_transport_error' 00:22:49.856 10:15:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b 
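For reference, the transient-error lookup traced here is a single RPC plus a jq filter; a minimal standalone replay of that query, assuming a bdevperf instance is listening on /var/tmp/bperf.sock with an attached bdev named nvme0n1:
# Fetch per-bdev I/O statistics over the bdevperf RPC socket, then extract
# the NVMe "transient transport error" completion counter asserted on below.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error'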
00:22:50.115 10:15:20 -- host/digest.sh@71 -- # (( 510 > 0 ))
00:22:50.115 10:15:20 -- host/digest.sh@73 -- # killprocess 97285
00:22:50.115 10:15:20 -- common/autotest_common.sh@926 -- # '[' -z 97285 ']'
00:22:50.115 10:15:20 -- common/autotest_common.sh@930 -- # kill -0 97285
00:22:50.115 10:15:20 -- common/autotest_common.sh@931 -- # uname
00:22:50.115 10:15:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:50.115 10:15:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97285
00:22:50.373 10:15:20 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:22:50.373 10:15:20 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:22:50.373 killing process with pid 97285
00:22:50.373 10:15:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97285'
00:22:50.373 10:15:20 -- common/autotest_common.sh@945 -- # kill 97285
00:22:50.373 Received shutdown signal, test time was about 2.000000 seconds
00:22:50.373
00:22:50.373 Latency(us)
00:22:50.373 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:50.373 ===================================================================================================================
00:22:50.373 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:50.373 10:15:20 -- common/autotest_common.sh@950 -- # wait 97285
00:22:50.373 10:15:20 -- host/digest.sh@115 -- # killprocess 96975
00:22:50.373 10:15:20 -- common/autotest_common.sh@926 -- # '[' -z 96975 ']'
00:22:50.373 10:15:20 -- common/autotest_common.sh@930 -- # kill -0 96975
00:22:50.373 10:15:20 -- common/autotest_common.sh@931 -- # uname
00:22:50.373 10:15:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:22:50.373 10:15:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 96975
00:22:50.373 10:15:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:22:50.373 10:15:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:22:50.373 killing process with pid 96975
00:22:50.373 10:15:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 96975'
00:22:50.373 10:15:20 -- common/autotest_common.sh@945 -- # kill 96975
00:22:50.373 10:15:20 -- common/autotest_common.sh@950 -- # wait 96975
00:22:50.631
00:22:50.631 real 0m17.839s
00:22:50.631 user 0m33.406s
00:22:50.631 sys 0m4.777s
00:22:50.631 10:15:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:50.631 10:15:21 -- common/autotest_common.sh@10 -- # set +x
00:22:50.631 ************************************
00:22:50.631 END TEST nvmf_digest_error
00:22:50.631 ************************************
00:22:50.631 10:15:21 -- host/digest.sh@138 -- # trap - SIGINT SIGTERM EXIT
00:22:50.631 10:15:21 -- host/digest.sh@139 -- # nvmftestfini
00:22:50.631 10:15:21 -- nvmf/common.sh@476 -- # nvmfcleanup
00:22:50.631 10:15:21 -- nvmf/common.sh@116 -- # sync
00:22:50.890 10:15:21 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@119 -- # set +e
00:22:50.890 10:15:21 -- nvmf/common.sh@120 -- # for i in {1..20}
00:22:50.890 10:15:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:22:50.890 rmmod nvme_tcp
00:22:50.890 rmmod nvme_fabrics
00:22:50.890 rmmod nvme_keyring
00:22:50.890 10:15:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:22:50.890 10:15:21 -- nvmf/common.sh@123 -- # set -e
00:22:50.890 10:15:21 -- nvmf/common.sh@124 -- # return 0
00:22:50.890 10:15:21 -- nvmf/common.sh@477 -- # '[' -n 96975 ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@478 -- # killprocess 96975
00:22:50.890 10:15:21 -- common/autotest_common.sh@926 -- # '[' -z 96975 ']'
00:22:50.890 10:15:21 -- common/autotest_common.sh@930 -- # kill -0 96975
00:22:50.890 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (96975) - No such process
00:22:50.890 Process with pid 96975 is not found
00:22:50.890 10:15:21 -- common/autotest_common.sh@953 -- # echo 'Process with pid 96975 is not found'
00:22:50.890 10:15:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:22:50.890 10:15:21 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:22:50.890 10:15:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:50.890 10:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:50.890 10:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:50.890 10:15:21 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:22:50.890
00:22:50.890 real 0m36.768s
00:22:50.890 user 1m7.827s
00:22:50.890 sys 0m9.833s
00:22:50.890 10:15:21 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:22:50.890 ************************************
00:22:50.890 END TEST nvmf_digest
00:22:50.890 10:15:21 -- common/autotest_common.sh@10 -- # set +x
00:22:50.890 ************************************
00:22:50.890 10:15:21 -- nvmf/nvmf.sh@110 -- # [[ 1 -eq 1 ]]
00:22:50.890 10:15:21 -- nvmf/nvmf.sh@110 -- # [[ tcp == \t\c\p ]]
00:22:50.890 10:15:21 -- nvmf/nvmf.sh@112 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:22:50.890 10:15:21 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:22:50.890 10:15:21 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:22:50.890 10:15:21 -- common/autotest_common.sh@10 -- # set +x
00:22:50.890 ************************************
00:22:50.890 START TEST nvmf_mdns_discovery
00:22:50.890 ************************************
00:22:50.890 10:15:21 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp
00:22:50.890 * Looking for test storage...
00:22:50.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:22:50.890 10:15:21 -- nvmf/common.sh@7 -- # uname -s
00:22:50.890 10:15:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:22:50.890 10:15:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:22:50.890 10:15:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:22:50.890 10:15:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:22:50.890 10:15:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:22:50.890 10:15:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:22:50.890 10:15:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:22:50.890 10:15:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:22:50.890 10:15:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:22:50.890 10:15:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31
00:22:50.890 10:15:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31
00:22:50.890 10:15:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:22:50.890 10:15:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:22:50.890 10:15:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:22:50.890 10:15:21 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:22:50.890 10:15:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:22:50.890 10:15:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:22:50.890 10:15:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:22:50.890 10:15:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same three toolchain directories repeated ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:22:50.890 10:15:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[... same components and suffix as above ...]
00:22:50.890 10:15:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[... same components and suffix as above ...]
00:22:50.890 10:15:21 -- paths/export.sh@5 -- # export PATH
00:22:50.890 10:15:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[... same components and suffix as above ...]
00:22:50.890 10:15:21 -- nvmf/common.sh@46 -- # : 0
00:22:50.890 10:15:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:22:50.890 10:15:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:22:50.890 10:15:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:22:50.890 10:15:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:22:50.890 10:15:21 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock
00:22:50.890 10:15:21 -- host/mdns_discovery.sh@23 -- # nvmftestinit
00:22:50.890 10:15:21 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:22:50.890 10:15:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:22:50.890 10:15:21 -- nvmf/common.sh@436 -- # prepare_net_devs
00:22:50.890 10:15:21 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:22:50.890 10:15:21 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:22:50.890 10:15:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:22:50.890 10:15:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:22:50.890 10:15:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:22:50.890 10:15:21 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:22:50.890 10:15:21 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:22:50.890 10:15:21 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:22:50.890 10:15:21 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:22:50.890 10:15:21 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:22:50.890 10:15:21 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:22:50.890 10:15:21 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:22:50.890 10:15:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:22:50.890 10:15:21 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:22:50.890 10:15:21 -- nvmf/common.sh@147 -- #
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:50.890 10:15:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:50.890 10:15:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:50.890 10:15:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:50.890 10:15:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:50.890 10:15:21 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:22:51.149 10:15:21 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:22:51.149 Cannot find device "nvmf_tgt_br" 00:22:51.149 10:15:21 -- nvmf/common.sh@154 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:22:51.149 Cannot find device "nvmf_tgt_br2" 00:22:51.149 10:15:21 -- nvmf/common.sh@155 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:22:51.149 10:15:21 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:22:51.149 Cannot find device "nvmf_tgt_br" 00:22:51.149 10:15:21 -- nvmf/common.sh@157 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:22:51.149 Cannot find device "nvmf_tgt_br2" 00:22:51.149 10:15:21 -- nvmf/common.sh@158 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:22:51.149 10:15:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:22:51.149 10:15:21 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:51.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.149 10:15:21 -- nvmf/common.sh@161 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:51.149 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:51.149 10:15:21 -- nvmf/common.sh@162 -- # true 00:22:51.149 10:15:21 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:22:51.149 10:15:21 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:51.149 10:15:21 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:51.149 10:15:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:51.149 10:15:21 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:51.149 10:15:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:51.149 10:15:21 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:51.149 10:15:21 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:51.149 10:15:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:51.149 10:15:21 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:22:51.149 10:15:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:22:51.149 10:15:21 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:22:51.149 10:15:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:22:51.149 10:15:21 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:51.149 10:15:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:51.149 10:15:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:51.149 10:15:21 -- nvmf/common.sh@191 -- # ip link add nvmf_br type 
bridge 00:22:51.149 10:15:21 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:22:51.149 10:15:21 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:22:51.149 10:15:21 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:51.407 10:15:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:51.408 10:15:21 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:51.408 10:15:21 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:51.408 10:15:21 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:22:51.408 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.408 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.076 ms 00:22:51.408 00:22:51.408 --- 10.0.0.2 ping statistics --- 00:22:51.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.408 rtt min/avg/max/mdev = 0.076/0.076/0.076/0.000 ms 00:22:51.408 10:15:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:22:51.408 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:51.408 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:22:51.408 00:22:51.408 --- 10.0.0.3 ping statistics --- 00:22:51.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.408 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:22:51.408 10:15:21 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:51.408 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:51.408 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:22:51.408 00:22:51.408 --- 10.0.0.1 ping statistics --- 00:22:51.408 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.408 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:22:51.408 10:15:21 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.408 10:15:21 -- nvmf/common.sh@421 -- # return 0 00:22:51.408 10:15:21 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:51.408 10:15:21 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.408 10:15:21 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:22:51.408 10:15:21 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:22:51.408 10:15:21 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.408 10:15:21 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:22:51.408 10:15:21 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:22:51.408 10:15:21 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.408 10:15:21 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:51.408 10:15:21 -- common/autotest_common.sh@712 -- # xtrace_disable 00:22:51.408 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 10:15:21 -- nvmf/common.sh@469 -- # nvmfpid=97579 00:22:51.408 10:15:21 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:51.408 10:15:21 -- nvmf/common.sh@470 -- # waitforlisten 97579 00:22:51.408 10:15:21 -- common/autotest_common.sh@819 -- # '[' -z 97579 ']' 00:22:51.408 10:15:21 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.408 10:15:21 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:51.408 10:15:21 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
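Condensed for reference, the nvmf_veth_init sequence traced above builds a small veth/bridge topology: the SPDK target lives inside the nvmf_tgt_ns_spdk namespace with two addresses, and the initiator reaches it from the root namespace. A minimal replay sketch using the same interface names and addresses (the `ip link set ... up` bring-up steps are omitted for brevity):
# One namespace for the target, three veth pairs, one bridge tying the
# peer ends together; addresses match the 10.0.0.0/24 plan used above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side,   10.0.0.2
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2   # target side,   10.0.0.3
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$dev" master nvmf_br   # bridge the root-namespace peer ends
done
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
The three pings that follow in the trace (10.0.0.2, 10.0.0.3, and 10.0.0.1 from inside the namespace) are simply sanity checks that this topology forwards in both directions.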
00:22:51.408 10:15:21 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:51.408 10:15:21 -- common/autotest_common.sh@10 -- # set +x 00:22:51.408 [2024-07-14 10:15:21.828804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:22:51.408 [2024-07-14 10:15:21.828918] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.665 [2024-07-14 10:15:21.971517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.665 [2024-07-14 10:15:22.049194] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:51.665 [2024-07-14 10:15:22.049406] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.665 [2024-07-14 10:15:22.049420] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:51.665 [2024-07-14 10:15:22.049429] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.665 [2024-07-14 10:15:22.049458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.598 10:15:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:52.598 10:15:22 -- common/autotest_common.sh@852 -- # return 0 00:22:52.598 10:15:22 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:52.598 10:15:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:22:52.598 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.598 10:15:22 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 [2024-07-14 10:15:22.963548] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 [2024-07-14 10:15:22.971709] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- 
common/autotest_common.sh@10 -- # set +x 00:22:52.599 null0 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 null1 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 null2 00:22:52.599 10:15:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:22 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:22:52.599 10:15:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 null3 00:22:52.599 10:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:23 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 00:22:52.599 10:15:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:52.599 10:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 10:15:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:52.599 10:15:23 -- host/mdns_discovery.sh@47 -- # hostpid=97629 00:22:52.599 10:15:23 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:52.599 10:15:23 -- host/mdns_discovery.sh@48 -- # waitforlisten 97629 /tmp/host.sock 00:22:52.599 10:15:23 -- common/autotest_common.sh@819 -- # '[' -z 97629 ']' 00:22:52.599 10:15:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:22:52.599 10:15:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:22:52.599 10:15:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:52.599 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:52.599 10:15:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:22:52.599 10:15:23 -- common/autotest_common.sh@10 -- # set +x 00:22:52.599 [2024-07-14 10:15:23.073788] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
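The target-side preparation traced above condenses to a handful of rpc.py calls; a minimal replay sketch, assuming the first nvmf_tgt was started with --wait-for-rpc and answers on the default /var/tmp/spdk.sock (rpc.py shortened from the full scripts/rpc.py path used in the trace):
# Configure discovery filtering, finish init, create the TCP transport,
# expose a discovery listener, and back the future subsystems with null bdevs.
rpc.py nvmf_set_config --discovery-filter=address
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
    -t tcp -a 10.0.0.2 -s 8009
for b in null0 null1 null2 null3; do
    rpc.py bdev_null_create "$b" 1000 512   # 1000 MB backing, 512 B blocks
done
rpc.py bdev_wait_for_examine
The second nvmf_tgt launched above (-m 0x1 -r /tmp/host.sock) plays the host role; the test addresses it separately with rpc_cmd -s /tmp/host.sock in the steps that follow.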
00:22:52.599 [2024-07-14 10:15:23.073884] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid97629 ] 00:22:52.856 [2024-07-14 10:15:23.211399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.856 [2024-07-14 10:15:23.284499] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:52.856 [2024-07-14 10:15:23.284673] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:53.861 10:15:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:22:53.861 10:15:24 -- common/autotest_common.sh@852 -- # return 0 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@57 -- # avahipid=97658 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@58 -- # sleep 1 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:22:53.861 10:15:24 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:22:53.861 Process 986 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:22:53.861 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:22:53.861 Successfully dropped root privileges. 00:22:53.861 avahi-daemon 0.8 starting up. 00:22:53.861 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:22:53.861 Successfully called chroot(). 00:22:53.861 Successfully dropped remaining capabilities. 00:22:54.793 No service file found in /etc/avahi/services. 00:22:54.793 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:22:54.793 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:22:54.793 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:22:54.793 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:22:54.793 Network interface enumeration completed. 00:22:54.793 Registering new address record for fe80::a44e:24ff:fe72:6342 on nvmf_tgt_if2.*. 00:22:54.793 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:22:54.793 Registering new address record for fe80::e073:5fff:fecc:6446 on nvmf_tgt_if.*. 00:22:54.793 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:22:54.793 Server startup complete. Host name is fedora38-cloud-1716830599-074-updated-1705279005.local. Local service cookie is 3427527450. 
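The avahi-daemon started above is deliberately confined to the two target-side interfaces by an inline config, which the trace feeds through /dev/fd/63; an equivalent minimal invocation (a sketch, not the literal redirection used by the script):
# Run avahi-daemon inside the target namespace, restricted to the two
# target interfaces and IPv4 only, with the config supplied inline.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f <(printf '%s\n' \
    '[server]' \
    'allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2' \
    'use-ipv4=yes' \
    'use-ipv6=no')
This is why the startup messages above only join mDNS multicast groups on nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.2 and 10.0.0.3) and never touch the initiator side.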
00:22:54.793 10:15:25 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:54.793 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.793 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.793 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:22:54.793 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.793 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.793 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:54.793 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.793 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@68 -- # sort 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@68 -- # xargs 00:22:54.793 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:22:54.793 10:15:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:54.793 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.793 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@64 -- # sort 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@64 -- # xargs 00:22:54.794 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:54.794 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.794 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.794 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:54.794 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:54.794 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@68 -- # sort 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@68 -- # xargs 00:22:54.794 10:15:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:54.794 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # sort 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- 
host/mdns_discovery.sh@64 -- # xargs 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@68 -- # sort 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@68 -- # xargs 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 [2024-07-14 10:15:25.489197] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # sort 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@64 -- # xargs 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 [2024-07-14 10:15:25.552389] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.052 [2024-07-14 10:15:25.592345] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:55.052 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.052 10:15:25 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:22:55.052 10:15:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:22:55.052 10:15:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.309 [2024-07-14 10:15:25.600328] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:55.309 10:15:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:22:55.309 10:15:25 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=97709 00:22:55.309 10:15:25 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:22:55.309 10:15:25 -- host/mdns_discovery.sh@125 -- # sleep 5 00:22:55.874 [2024-07-14 10:15:26.389197] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:22:56.132 Established under name 'CDC' 00:22:56.390 [2024-07-14 10:15:26.789215] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:56.390 [2024-07-14 10:15:26.789241] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:22:56.390 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:56.390 cookie is 0 00:22:56.390 is_local: 1 00:22:56.390 our_own: 0 00:22:56.390 wide_area: 0 00:22:56.390 multicast: 1 00:22:56.390 cached: 1 00:22:56.390 [2024-07-14 10:15:26.889198] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:22:56.390 [2024-07-14 10:15:26.889242] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:22:56.390 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:22:56.390 cookie is 0 00:22:56.390 is_local: 1 00:22:56.390 our_own: 0 00:22:56.390 wide_area: 0 00:22:56.390 multicast: 1 00:22:56.390 cached: 1 00:22:57.324 [2024-07-14 10:15:27.797737] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:57.324 [2024-07-14 10:15:27.797770] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:57.324 [2024-07-14 10:15:27.797803] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:57.582 [2024-07-14 10:15:27.883824] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:22:57.582 [2024-07-14 10:15:27.897472] bdev_nvme.c:6759:discovery_attach_cb: 
*INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:57.582 [2024-07-14 10:15:27.897497] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:57.582 [2024-07-14 10:15:27.897527] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:57.582 [2024-07-14 10:15:27.946026] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:22:57.582 [2024-07-14 10:15:27.946054] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:22:57.582 [2024-07-14 10:15:27.984258] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:22:57.582 [2024-07-14 10:15:28.038851] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:22:57.582 [2024-07-14 10:15:28.038878] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:00.113 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.113 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@80 -- # sort 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@80 -- # xargs 00:23:00.113 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:23:00.113 10:15:30 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@76 -- # sort 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@76 -- # xargs 00:23:00.371 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@68 -- # sort 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@68 -- # xargs 00:23:00.371 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@64 
-- # xargs 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@64 -- # sort 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.371 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # xargs 00:23:00.371 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@72 -- # xargs 00:23:00.371 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:00.371 10:15:30 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:00.371 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.371 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.629 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:00.629 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.629 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.629 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:23:00.629 10:15:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:00.629 10:15:30 -- common/autotest_common.sh@10 -- # set +x 00:23:00.629 10:15:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:00.629 10:15:30 -- host/mdns_discovery.sh@139 -- # sleep 1 00:23:01.566 10:15:31 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:23:01.566 10:15:31 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:01.566 10:15:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.566 10:15:31 -- common/autotest_common.sh@10 -- # set +x 00:23:01.566 10:15:31 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:01.566 10:15:31 -- host/mdns_discovery.sh@64 -- # sort 00:23:01.566 10:15:31 -- host/mdns_discovery.sh@64 -- # xargs 00:23:01.566 10:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:01.566 10:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:23:01.566 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:01.566 10:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:01.566 10:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.566 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:01.566 [2024-07-14 10:15:32.103137] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:01.566 [2024-07-14 10:15:32.104146] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:01.566 [2024-07-14 10:15:32.104197] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.566 [2024-07-14 10:15:32.104284] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.566 [2024-07-14 10:15:32.104300] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.566 10:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.566 10:15:32 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:23:01.566 10:15:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:01.566 10:15:32 -- common/autotest_common.sh@10 -- # set +x 00:23:01.566 [2024-07-14 10:15:32.111048] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:01.566 [2024-07-14 10:15:32.112136] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:01.566 [2024-07-14 10:15:32.112206] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:01.825 10:15:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:01.825 10:15:32 -- host/mdns_discovery.sh@149 -- # sleep 1 00:23:01.825 [2024-07-14 10:15:32.244257] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:23:01.825 [2024-07-14 10:15:32.246259] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:23:01.825 [2024-07-14 10:15:32.301595] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:01.825 [2024-07-14 10:15:32.301622] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.825 [2024-07-14 10:15:32.301644] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.825 [2024-07-14 10:15:32.301659] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:01.825 [2024-07-14 10:15:32.310475] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:01.825 [2024-07-14 10:15:32.310499] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 
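The checks above all follow one helper pattern from mdns_discovery.sh: issue an RPC against the host application on /tmp/host.sock, extract one field with jq, then normalize with sort and xargs so the result can be string-compared against an expected literal. A minimal sketch of that pattern, assuming rpc.py is on PATH and stands in for the test's rpc_cmd wrapper:

get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

get_subsystem_paths() {
    # $1 = controller name, e.g. mdns0_nvme0
    rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}

# After the nvmf_subsystem_add_ns and nvmf_subsystem_add_listener steps above,
# the expected flattened outputs (as the checks above and below confirm) are:
[[ $(get_bdev_list) == "mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2" ]]
[[ $(get_subsystem_paths mdns0_nvme0) == "4420 4421" ]]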
00:23:01.825 [2024-07-14 10:15:32.310521] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:01.825 [2024-07-14 10:15:32.310537] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:01.825 [2024-07-14 10:15:32.347381] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:01.825 [2024-07-14 10:15:32.347401] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:01.825 [2024-07-14 10:15:32.356395] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:23:01.825 [2024-07-14 10:15:32.356416] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:02.769 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.769 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@68 -- # sort 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@68 -- # xargs 00:23:02.769 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:02.769 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.769 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@64 -- # sort 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@64 -- # xargs 00:23:02.769 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:02.769 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:02.769 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:02.769 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:02.769 10:15:33 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:03.031 
10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.031 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@72 -- # xargs 00:23:03.031 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:03.031 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.031 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:03.031 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:03.031 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.031 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.031 [2024-07-14 10:15:33.424950] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:03.031 [2024-07-14 10:15:33.425002] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.031 [2024-07-14 10:15:33.425036] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:03.031 [2024-07-14 10:15:33.425050] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.031 [2024-07-14 10:15:33.428383] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.428423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.428452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.428462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.428473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.428483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.428493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.428503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.428513] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is 
same with the state(5) to be set 00:23:03.031 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:23:03.031 10:15:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:03.031 10:15:33 -- common/autotest_common.sh@10 -- # set +x 00:23:03.031 [2024-07-14 10:15:33.431937] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:03.031 [2024-07-14 10:15:33.432008] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:23:03.031 [2024-07-14 10:15:33.435714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.435759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.435787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.435796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.435806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.435814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.435824] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:03.031 [2024-07-14 10:15:33.435851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:03.031 [2024-07-14 10:15:33.435860] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.031 10:15:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:03.031 10:15:33 -- host/mdns_discovery.sh@162 -- # sleep 1 00:23:03.031 [2024-07-14 10:15:33.438343] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.031 [2024-07-14 10:15:33.445639] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.031 [2024-07-14 10:15:33.448359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.031 [2024-07-14 10:15:33.448509] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.448559] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.448575] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.031 [2024-07-14 10:15:33.448586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.031 [2024-07-14 10:15:33.448603] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.031 [2024-07-14 
10:15:33.448634] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.031 [2024-07-14 10:15:33.448643] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.031 [2024-07-14 10:15:33.448653] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.031 [2024-07-14 10:15:33.448669] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.031 [2024-07-14 10:15:33.455650] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.031 [2024-07-14 10:15:33.455764] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.455806] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.455837] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.031 [2024-07-14 10:15:33.455847] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.031 [2024-07-14 10:15:33.455861] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.031 [2024-07-14 10:15:33.455874] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.031 [2024-07-14 10:15:33.455882] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.031 [2024-07-14 10:15:33.455890] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.031 [2024-07-14 10:15:33.455904] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.031 [2024-07-14 10:15:33.458433] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.031 [2024-07-14 10:15:33.458539] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.458582] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.031 [2024-07-14 10:15:33.458612] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.032 [2024-07-14 10:15:33.458621] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.458636] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.458649] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.458657] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.458666] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.032 [2024-07-14 10:15:33.458679] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.032 [2024-07-14 10:15:33.465731] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.032 [2024-07-14 10:15:33.465836] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.465876] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.465890] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.032 [2024-07-14 10:15:33.465899] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.465913] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.465925] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.465932] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.465940] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.032 [2024-07-14 10:15:33.465953] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.032 [2024-07-14 10:15:33.468493] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.032 [2024-07-14 10:15:33.468596] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.468635] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.468649] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.032 [2024-07-14 10:15:33.468658] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.468671] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.468684] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.468691] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.468699] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.032 [2024-07-14 10:15:33.468712] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
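The wall of connect() failures here is the fallout of the nvmf_subsystem_remove_listener calls at @160/@161: the target tore down the admin queues (the pending ASYNC EVENT REQUESTs complete as ABORTED - SQ DELETION), the old qpair becomes a bad file descriptor, and the bdev_nvme reconnect poller keeps redialing port 4420, hitting errno 111 (ECONNREFUSED) on every attempt until a fresh discovery log page prunes the 4420 path. The triggering step, as a sketch with the literal NQNs and addresses from this run (target-side RPCs go to the default socket, matching the unprefixed rpc_cmd calls above):

rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420
rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 \
    -t tcp -a 10.0.0.3 -s 4420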
00:23:03.032 [2024-07-14 10:15:33.475794] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.032 [2024-07-14 10:15:33.475899] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.475939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.475953] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.032 [2024-07-14 10:15:33.475961] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.475975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.476003] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.476012] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.476020] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.032 [2024-07-14 10:15:33.476032] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.032 [2024-07-14 10:15:33.478573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.032 [2024-07-14 10:15:33.478667] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.478709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.478724] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.032 [2024-07-14 10:15:33.478733] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.478748] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.478762] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.478770] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.478794] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.032 [2024-07-14 10:15:33.478834] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.032 [2024-07-14 10:15:33.485858] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.032 [2024-07-14 10:15:33.485965] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.486005] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.486019] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.032 [2024-07-14 10:15:33.486029] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.486043] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.486069] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.486078] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.486086] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.032 [2024-07-14 10:15:33.486098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.032 [2024-07-14 10:15:33.488621] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.032 [2024-07-14 10:15:33.488723] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.488763] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.488776] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.032 [2024-07-14 10:15:33.488785] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.488799] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.488811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.488819] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.488827] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.032 [2024-07-14 10:15:33.488839] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.032 [2024-07-14 10:15:33.495924] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.032 [2024-07-14 10:15:33.496031] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.496071] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.496085] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.032 [2024-07-14 10:15:33.496094] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.496107] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.496134] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.496143] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.496151] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.032 [2024-07-14 10:15:33.496163] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.032 [2024-07-14 10:15:33.498698] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.032 [2024-07-14 10:15:33.498809] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.498853] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.498867] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.032 [2024-07-14 10:15:33.498876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.498891] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.498904] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.498912] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.498920] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.032 [2024-07-14 10:15:33.498932] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.032 [2024-07-14 10:15:33.505991] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.032 [2024-07-14 10:15:33.506095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.506136] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.506155] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.032 [2024-07-14 10:15:33.506164] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.032 [2024-07-14 10:15:33.506178] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.032 [2024-07-14 10:15:33.506205] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.032 [2024-07-14 10:15:33.506225] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.032 [2024-07-14 10:15:33.506235] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.032 [2024-07-14 10:15:33.506248] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.032 [2024-07-14 10:15:33.508758] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.032 [2024-07-14 10:15:33.508861] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.032 [2024-07-14 10:15:33.508902] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.508916] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.033 [2024-07-14 10:15:33.508925] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.508940] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.508954] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.508962] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.508970] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.033 [2024-07-14 10:15:33.508983] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
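Rather than sleeping for a fixed interval while this retry loop burns down, a caller could poll the host until failover completes; a sketch under the same assumptions as above (rpc.py on PATH, host socket /tmp/host.sock, the controller names from this run):

# Wait until the only remaining path for mdns1_nvme0 is port 4421.
while [[ $(rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 \
             | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs) != 4421 ]]; do
    sleep 0.5
done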
00:23:03.033 [2024-07-14 10:15:33.516070] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.033 [2024-07-14 10:15:33.516175] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.516225] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.516241] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.033 [2024-07-14 10:15:33.516251] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.516265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.516292] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.516301] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.516309] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.033 [2024-07-14 10:15:33.516321] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.033 [2024-07-14 10:15:33.518841] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.033 [2024-07-14 10:15:33.518931] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.518973] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.518987] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.033 [2024-07-14 10:15:33.518996] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.519011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.519024] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.519032] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.519040] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.033 [2024-07-14 10:15:33.519053] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.033 [2024-07-14 10:15:33.526136] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.033 [2024-07-14 10:15:33.526257] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.526299] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.526313] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.033 [2024-07-14 10:15:33.526322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.526337] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.526365] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.526374] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.526382] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.033 [2024-07-14 10:15:33.526395] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.033 [2024-07-14 10:15:33.528888] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.033 [2024-07-14 10:15:33.528993] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.529033] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.529048] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.033 [2024-07-14 10:15:33.529057] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.529071] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.529084] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.529092] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.529100] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.033 [2024-07-14 10:15:33.529112] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.033 [2024-07-14 10:15:33.536220] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.033 [2024-07-14 10:15:33.536328] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.536368] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.536382] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.033 [2024-07-14 10:15:33.536391] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.536405] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.536440] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.536450] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.536458] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.033 [2024-07-14 10:15:33.536470] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.033 [2024-07-14 10:15:33.538953] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.033 [2024-07-14 10:15:33.539041] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.539081] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.539095] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.033 [2024-07-14 10:15:33.539104] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.539118] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.539145] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.539153] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.539161] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.033 [2024-07-14 10:15:33.539173] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.033 [2024-07-14 10:15:33.546302] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.033 [2024-07-14 10:15:33.546389] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.546429] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.546442] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.033 [2024-07-14 10:15:33.546451] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.546465] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.546525] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.546550] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.546559] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.033 [2024-07-14 10:15:33.546574] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.033 [2024-07-14 10:15:33.549000] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.033 [2024-07-14 10:15:33.549104] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.549145] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.549159] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.033 [2024-07-14 10:15:33.549168] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.549182] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.549195] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.549203] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.549212] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.033 [2024-07-14 10:15:33.549237] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:03.033 [2024-07-14 10:15:33.556364] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:23:03.033 [2024-07-14 10:15:33.556469] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.556508] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.556522] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf52760 with addr=10.0.0.3, port=4420 00:23:03.033 [2024-07-14 10:15:33.556531] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf52760 is same with the state(5) to be set 00:23:03.033 [2024-07-14 10:15:33.556554] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf52760 (9): Bad file descriptor 00:23:03.033 [2024-07-14 10:15:33.556581] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:23:03.033 [2024-07-14 10:15:33.556589] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:23:03.033 [2024-07-14 10:15:33.556597] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:23:03.033 [2024-07-14 10:15:33.556610] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:03.033 [2024-07-14 10:15:33.559063] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:03.033 [2024-07-14 10:15:33.559182] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.033 [2024-07-14 10:15:33.559221] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:03.034 [2024-07-14 10:15:33.559234] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x11392c0 with addr=10.0.0.2, port=4420 00:23:03.034 [2024-07-14 10:15:33.559255] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11392c0 is same with the state(5) to be set 00:23:03.034 [2024-07-14 10:15:33.559271] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11392c0 (9): Bad file descriptor 00:23:03.034 [2024-07-14 10:15:33.559284] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:03.034 [2024-07-14 10:15:33.559292] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:03.034 [2024-07-14 10:15:33.559300] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:03.034 [2024-07-14 10:15:33.559312] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
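Once the 4420 paths are pruned (next entries), the script re-checks that no new notifications arrived. get_notification_count keeps a running notify_id cursor: it asks notify_get_notifications for everything newer than the cursor, counts the result with jq, and advances the cursor by that count — which is why notify_id stepped 0 -> 2 -> 4 earlier while the count after the listener changes comes back 0. A sketch of that bookkeeping, same assumptions as above:

get_notification_count() {
    notification_count=$(rpc.py -s /tmp/host.sock \
        notify_get_notifications -i "$notify_id" | jq '. | length')
    # Advance the cursor so the next call only sees newer events.
    notify_id=$((notify_id + notification_count))
}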
00:23:03.034 [2024-07-14 10:15:33.563598] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:03.034 [2024-07-14 10:15:33.563643] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.034 [2024-07-14 10:15:33.563660] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:03.034 [2024-07-14 10:15:33.563690] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:23:03.034 [2024-07-14 10:15:33.563703] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:03.034 [2024-07-14 10:15:33.563716] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:03.293 [2024-07-14 10:15:33.649671] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:03.293 [2024-07-14 10:15:33.649737] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@68 -- # sort 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@68 -- # xargs 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@64 -- # sort 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@64 -- # xargs 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # xargs 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
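The remaining checks confirm the failover (only 4421 left on both controllers, zero new notifications) and then exercise browser teardown: bdev_nvme_stop_mdns_discovery at @171 stops the avahi poller and detaches everything, after which the same get_* helpers must all come back empty. The restart at @181 and the deliberately failing duplicate at @182 correspond to this sketch (literal names from this run; the second start reuses the browser name mdns and is rejected with Code=-17, File exists):

rpc.py -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns
rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test
rpc.py -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
    -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test  # expected: File exists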
00:23:04.229 10:15:34 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # sort -n 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@72 -- # xargs 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:04.229 10:15:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:04.229 10:15:34 -- common/autotest_common.sh@10 -- # set +x 00:23:04.229 10:15:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:04.229 10:15:34 -- host/mdns_discovery.sh@172 -- # sleep 1 00:23:04.487 [2024-07-14 10:15:34.789236] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@80 -- # sort 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@80 -- # xargs 00:23:05.423 10:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@68 -- # xargs 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@68 -- # sort 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 10:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@64 -- # sort 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@64 -- # xargs 00:23:05.423 10:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 10:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 10:15:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:05.423 10:15:35 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:05.423 10:15:35 -- common/autotest_common.sh@640 -- # local es=0 00:23:05.423 10:15:35 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:05.423 10:15:35 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:05.423 10:15:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:05.423 10:15:35 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:05.423 10:15:35 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:05.423 10:15:35 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:23:05.423 10:15:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:05.423 10:15:35 -- common/autotest_common.sh@10 -- # set +x 00:23:05.423 [2024-07-14 10:15:35.965160] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:23:05.423 2024/07/14 10:15:35 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:05.684 request: 00:23:05.684 { 00:23:05.684 "method": "bdev_nvme_start_mdns_discovery", 00:23:05.684 "params": { 00:23:05.684 "name": "mdns", 00:23:05.684 "svcname": "_nvme-disc._http", 00:23:05.684 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:05.684 } 00:23:05.684 } 00:23:05.684 Got JSON-RPC error response 00:23:05.684 GoRPCClient: error on JSON-RPC call 00:23:05.684 10:15:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:05.684 10:15:35 -- 
common/autotest_common.sh@643 -- # es=1 00:23:05.684 10:15:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:05.684 10:15:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:05.684 10:15:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:05.684 10:15:35 -- host/mdns_discovery.sh@183 -- # sleep 5 00:23:05.943 [2024-07-14 10:15:36.353883] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:23:05.943 [2024-07-14 10:15:36.453881] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:23:06.202 [2024-07-14 10:15:36.553885] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:06.202 [2024-07-14 10:15:36.553902] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.3) 00:23:06.202 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:06.202 cookie is 0 00:23:06.202 is_local: 1 00:23:06.202 our_own: 0 00:23:06.202 wide_area: 0 00:23:06.202 multicast: 1 00:23:06.202 cached: 1 00:23:06.202 [2024-07-14 10:15:36.653888] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:23:06.202 [2024-07-14 10:15:36.653907] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1716830599-074-updated-1705279005.local:8009 (10.0.0.2) 00:23:06.202 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:23:06.202 cookie is 0 00:23:06.202 is_local: 1 00:23:06.202 our_own: 0 00:23:06.202 wide_area: 0 00:23:06.202 multicast: 1 00:23:06.202 cached: 1 00:23:07.135 [2024-07-14 10:15:37.559537] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:07.136 [2024-07-14 10:15:37.559560] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:07.136 [2024-07-14 10:15:37.559578] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:07.136 [2024-07-14 10:15:37.645649] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:23:07.136 [2024-07-14 10:15:37.659359] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:07.136 [2024-07-14 10:15:37.659377] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:07.136 [2024-07-14 10:15:37.659392] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:07.393 [2024-07-14 10:15:37.707351] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:23:07.393 [2024-07-14 10:15:37.707373] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:23:07.393 [2024-07-14 10:15:37.745551] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:23:07.393 [2024-07-14 10:15:37.804187] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:23:07.393 [2024-07-14 10:15:37.804219] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:10.678 10:15:40 -- 
host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:23:10.679 10:15:40 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:23:10.679 10:15:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.679 10:15:40 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 10:15:40 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:23:10.679 10:15:40 -- host/mdns_discovery.sh@80 -- # sort 00:23:10.679 10:15:40 -- host/mdns_discovery.sh@80 -- # xargs 00:23:10.679 10:15:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # xargs 00:23:10.679 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # sort 00:23:10.679 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 10:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.679 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.679 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@64 -- # sort 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@64 -- # xargs 00:23:10.679 10:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:10.679 10:15:41 -- common/autotest_common.sh@640 -- # local es=0 00:23:10.679 10:15:41 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:10.679 10:15:41 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:23:10.679 10:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:10.679 10:15:41 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:23:10.679 10:15:41 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:23:10.679 10:15:41 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:23:10.679 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.679 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 [2024-07-14 10:15:41.157298] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:23:10.679 2024/07/14 10:15:41 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: 
map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:23:10.679 request: 00:23:10.679 { 00:23:10.679 "method": "bdev_nvme_start_mdns_discovery", 00:23:10.679 "params": { 00:23:10.679 "name": "cdc", 00:23:10.679 "svcname": "_nvme-disc._tcp", 00:23:10.679 "hostnqn": "nqn.2021-12.io.spdk:test" 00:23:10.679 } 00:23:10.679 } 00:23:10.679 Got JSON-RPC error response 00:23:10.679 GoRPCClient: error on JSON-RPC call 00:23:10.679 10:15:41 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:23:10.679 10:15:41 -- common/autotest_common.sh@643 -- # es=1 00:23:10.679 10:15:41 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:23:10.679 10:15:41 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:23:10.679 10:15:41 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # sort 00:23:10.679 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.679 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@76 -- # xargs 00:23:10.679 10:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:23:10.679 10:15:41 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.938 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.938 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@64 -- # sort 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@64 -- # xargs 00:23:10.938 10:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:23:10.938 10:15:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:23:10.938 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:10.938 10:15:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@197 -- # kill 97629 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@200 -- # wait 97629 00:23:10.938 [2024-07-14 10:15:41.388430] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@201 -- # kill 97709 00:23:10.938 Got SIGTERM, quitting. 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@202 -- # kill 97658 00:23:10.938 10:15:41 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:23:10.938 10:15:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:10.938 10:15:41 -- nvmf/common.sh@116 -- # sync 00:23:10.938 Got SIGTERM, quitting. 
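The two NOT blocks above are negative tests of the duplicate-start guard in bdev_mdns_client.c: with mDNS discovery already running under the name mdns for service _nvme-disc._tcp, a second bdev_nvme_start_mdns_discovery that reuses either the name or the service must fail, and it does, with JSON-RPC error -17 (File exists). The same assertion outside the NOT/valid_exec_arg harness might look like the sketch below (RPC name and flags copied from the trace; the if/exit wrapper is illustrative):

# A duplicate mDNS discovery start must be rejected; success is the bug.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
if "$rpc_py" -s /tmp/host.sock bdev_nvme_start_mdns_discovery \
        -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test; then
    echo "ERROR: duplicate mDNS discovery start unexpectedly succeeded" >&2
    exit 1
fi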
00:23:10.938 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:23:10.938 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:23:10.938 avahi-daemon 0.8 exiting. 00:23:11.197 10:15:41 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:23:11.197 10:15:41 -- nvmf/common.sh@119 -- # set +e 00:23:11.197 10:15:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:11.197 10:15:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:23:11.197 rmmod nvme_tcp 00:23:11.197 rmmod nvme_fabrics 00:23:11.197 rmmod nvme_keyring 00:23:11.197 10:15:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:11.197 10:15:41 -- nvmf/common.sh@123 -- # set -e 00:23:11.197 10:15:41 -- nvmf/common.sh@124 -- # return 0 00:23:11.197 10:15:41 -- nvmf/common.sh@477 -- # '[' -n 97579 ']' 00:23:11.197 10:15:41 -- nvmf/common.sh@478 -- # killprocess 97579 00:23:11.197 10:15:41 -- common/autotest_common.sh@926 -- # '[' -z 97579 ']' 00:23:11.197 10:15:41 -- common/autotest_common.sh@930 -- # kill -0 97579 00:23:11.197 10:15:41 -- common/autotest_common.sh@931 -- # uname 00:23:11.197 10:15:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:23:11.197 10:15:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 97579 00:23:11.197 10:15:41 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:23:11.197 10:15:41 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:23:11.197 killing process with pid 97579 00:23:11.197 10:15:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 97579' 00:23:11.197 10:15:41 -- common/autotest_common.sh@945 -- # kill 97579 00:23:11.197 10:15:41 -- common/autotest_common.sh@950 -- # wait 97579 00:23:11.462 10:15:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:11.462 10:15:41 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:23:11.462 10:15:41 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:23:11.462 10:15:41 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.462 10:15:41 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:23:11.462 10:15:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.462 10:15:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.462 10:15:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.463 10:15:41 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:23:11.463 ************************************ 00:23:11.463 END TEST nvmf_mdns_discovery 00:23:11.463 ************************************ 00:23:11.463 00:23:11.463 real 0m20.512s 00:23:11.463 user 0m40.269s 00:23:11.463 sys 0m1.979s 00:23:11.463 10:15:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:11.463 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.463 10:15:41 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:23:11.463 10:15:41 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:11.463 10:15:41 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:23:11.463 10:15:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:23:11.463 10:15:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.463 ************************************ 00:23:11.463 START TEST nvmf_multipath 00:23:11.463 ************************************ 00:23:11.463 10:15:41 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:23:11.463 * Looking for 
test storage... 00:23:11.463 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:11.463 10:15:41 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.463 10:15:41 -- nvmf/common.sh@7 -- # uname -s 00:23:11.463 10:15:41 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.463 10:15:41 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.463 10:15:41 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.463 10:15:41 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.463 10:15:41 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.463 10:15:41 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.463 10:15:41 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.463 10:15:41 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.463 10:15:41 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.463 10:15:41 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:23:11.463 10:15:41 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:23:11.463 10:15:41 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.463 10:15:41 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.463 10:15:41 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.463 10:15:41 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.463 10:15:41 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.463 10:15:41 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.463 10:15:41 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.463 10:15:41 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.463 10:15:41 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.463 10:15:41 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.463 10:15:41 -- 
paths/export.sh@5 -- # export PATH 00:23:11.463 10:15:41 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.463 10:15:41 -- nvmf/common.sh@46 -- # : 0 00:23:11.463 10:15:41 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:11.463 10:15:41 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:11.463 10:15:41 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:11.463 10:15:41 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.463 10:15:41 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.463 10:15:41 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:11.463 10:15:41 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:11.463 10:15:41 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:11.463 10:15:41 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.463 10:15:41 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.463 10:15:41 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.463 10:15:41 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:23:11.463 10:15:41 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:11.463 10:15:41 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:11.463 10:15:41 -- host/multipath.sh@30 -- # nvmftestinit 00:23:11.463 10:15:41 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:23:11.463 10:15:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.463 10:15:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:11.463 10:15:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:11.463 10:15:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:11.463 10:15:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.463 10:15:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.463 10:15:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.463 10:15:41 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:23:11.463 10:15:41 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:23:11.463 10:15:41 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.463 10:15:41 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.463 10:15:41 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.463 10:15:41 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:23:11.463 10:15:41 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.463 10:15:41 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.463 10:15:41 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.463 10:15:41 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.463 10:15:41 -- 
nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.463 10:15:41 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.463 10:15:41 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.463 10:15:41 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.463 10:15:41 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:23:11.463 10:15:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:23:11.782 Cannot find device "nvmf_tgt_br" 00:23:11.782 10:15:42 -- nvmf/common.sh@154 -- # true 00:23:11.782 10:15:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.782 Cannot find device "nvmf_tgt_br2" 00:23:11.782 10:15:42 -- nvmf/common.sh@155 -- # true 00:23:11.782 10:15:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:23:11.782 10:15:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:23:11.783 Cannot find device "nvmf_tgt_br" 00:23:11.783 10:15:42 -- nvmf/common.sh@157 -- # true 00:23:11.783 10:15:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:23:11.783 Cannot find device "nvmf_tgt_br2" 00:23:11.783 10:15:42 -- nvmf/common.sh@158 -- # true 00:23:11.783 10:15:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:23:11.783 10:15:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:23:11.783 10:15:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.783 10:15:42 -- nvmf/common.sh@161 -- # true 00:23:11.783 10:15:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.783 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.783 10:15:42 -- nvmf/common.sh@162 -- # true 00:23:11.783 10:15:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.783 10:15:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.783 10:15:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.783 10:15:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.783 10:15:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.783 10:15:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.783 10:15:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.783 10:15:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.783 10:15:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.783 10:15:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:23:11.783 10:15:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:23:11.783 10:15:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:23:11.783 10:15:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:23:11.783 10:15:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:11.783 10:15:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.783 10:15:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.783 10:15:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:23:11.783 10:15:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 
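The "Cannot find device" and "Cannot open network namespace" messages above are the expected teardown pass on a clean host: nvmf_veth_init removes any leftover topology before rebuilding it. The rebuild traced here places two target endpoints (10.0.0.2 and 10.0.0.3) in a namespace, with their veth peers left on the host for bridging; replayed as a standalone sketch (run as root, names and addresses copied from the trace):

# Namespace and veth pairs, as created by nvmf_veth_init above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# Move the target ends into the namespace and assign addresses.
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# Bring every endpoint (and the namespace loopback) up.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge that will join the initiator to both target peers.
ip link add nvmf_br type bridge
ip link set nvmf_br up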
00:23:11.783 10:15:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.783 10:15:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.783 10:15:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:12.045 10:15:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:12.045 10:15:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:12.045 10:15:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:23:12.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:12.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.085 ms 00:23:12.045 00:23:12.045 --- 10.0.0.2 ping statistics --- 00:23:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.045 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:12.045 10:15:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:23:12.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:12.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:23:12.045 00:23:12.045 --- 10.0.0.3 ping statistics --- 00:23:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.045 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:23:12.045 10:15:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:12.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:12.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:12.045 00:23:12.045 --- 10.0.0.1 ping statistics --- 00:23:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:12.045 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:12.045 10:15:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:12.045 10:15:42 -- nvmf/common.sh@421 -- # return 0 00:23:12.045 10:15:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:12.045 10:15:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:12.045 10:15:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:23:12.045 10:15:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:23:12.045 10:15:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:12.045 10:15:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:23:12.045 10:15:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:23:12.045 10:15:42 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:23:12.045 10:15:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:12.045 10:15:42 -- common/autotest_common.sh@712 -- # xtrace_disable 00:23:12.045 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:23:12.045 10:15:42 -- nvmf/common.sh@469 -- # nvmfpid=98220 00:23:12.045 10:15:42 -- nvmf/common.sh@470 -- # waitforlisten 98220 00:23:12.045 10:15:42 -- common/autotest_common.sh@819 -- # '[' -z 98220 ']' 00:23:12.045 10:15:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:12.045 10:15:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:12.045 10:15:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:12.045 10:15:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:12.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
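The three pings above close the loop on the topology: initiator to both target addresses, then target namespace back to the initiator, all before any NVMe/TCP traffic is attempted. The bridge wiring and connectivity checks reduce to the following (commands copied from the trace):

# Enslave the host-side peers to the bridge and open the NVMe/TCP port.
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# Reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1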
00:23:12.045 10:15:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:12.045 10:15:42 -- common/autotest_common.sh@10 -- # set +x 00:23:12.045 [2024-07-14 10:15:42.391567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:23:12.045 [2024-07-14 10:15:42.391685] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:12.045 [2024-07-14 10:15:42.526550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:12.304 [2024-07-14 10:15:42.596977] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:12.304 [2024-07-14 10:15:42.597139] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:12.304 [2024-07-14 10:15:42.597151] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:12.304 [2024-07-14 10:15:42.597160] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:12.304 [2024-07-14 10:15:42.597318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:12.304 [2024-07-14 10:15:42.597329] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:12.871 10:15:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:12.871 10:15:43 -- common/autotest_common.sh@852 -- # return 0 00:23:12.871 10:15:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:12.871 10:15:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:23:12.871 10:15:43 -- common/autotest_common.sh@10 -- # set +x 00:23:13.129 10:15:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.129 10:15:43 -- host/multipath.sh@33 -- # nvmfapp_pid=98220 00:23:13.129 10:15:43 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:13.387 [2024-07-14 10:15:43.677105] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.387 10:15:43 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:13.387 Malloc0 00:23:13.387 10:15:43 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:13.646 10:15:44 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:13.904 10:15:44 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.162 [2024-07-14 10:15:44.536706] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.162 10:15:44 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.421 [2024-07-14 10:15:44.724812] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.421 10:15:44 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:14.421 10:15:44 -- host/multipath.sh@44 -- # bdevperf_pid=98318 
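Everything the target needs for the multipath scenario is provisioned over JSON-RPC in the block above: a TCP transport, a 64 MiB malloc bdev, one ANA-reporting subsystem (-r) capped at two namespaces (-m 2), and two listeners on the same address, ports 4420 and 4421, which become the two paths the host flips between for the rest of the test. Replayed as a standalone sketch (commands copied from the trace; the rpc_get_methods readiness poll is a stand-in for the harness's waitforlisten helper, not its actual implementation):

# Start the target inside the namespace and wait for its RPC socket.
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
until "$rpc_py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

# Transport, backing bdev, subsystem, namespace, and the two path listeners.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192
"$rpc_py" bdev_malloc_create 64 512 -b Malloc0
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
    -a -s SPDK00000000000001 -r -m 2
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4421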
00:23:14.421 10:15:44 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.421 10:15:44 -- host/multipath.sh@47 -- # waitforlisten 98318 /var/tmp/bdevperf.sock 00:23:14.421 10:15:44 -- common/autotest_common.sh@819 -- # '[' -z 98318 ']' 00:23:14.421 10:15:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.421 10:15:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:23:14.421 10:15:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.421 10:15:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:23:14.421 10:15:44 -- common/autotest_common.sh@10 -- # set +x 00:23:15.358 10:15:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:23:15.358 10:15:45 -- common/autotest_common.sh@852 -- # return 0 00:23:15.358 10:15:45 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:15.358 10:15:45 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:23:15.926 Nvme0n1 00:23:15.926 10:15:46 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:16.184 Nvme0n1 00:23:16.185 10:15:46 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:16.185 10:15:46 -- host/multipath.sh@78 -- # sleep 1 00:23:17.121 10:15:47 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:23:17.121 10:15:47 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:17.379 10:15:47 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:17.638 10:15:48 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:23:17.638 10:15:48 -- host/multipath.sh@65 -- # dtrace_pid=98405 00:23:17.638 10:15:48 -- host/multipath.sh@66 -- # sleep 6 00:23:17.638 10:15:48 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:24.201 10:15:54 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:24.201 10:15:54 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:24.201 10:15:54 -- host/multipath.sh@67 -- # active_port=4421 00:23:24.201 10:15:54 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:24.201 Attaching 4 probes... 
00:23:24.201 @path[10.0.0.2, 4421]: 21445 00:23:24.201 @path[10.0.0.2, 4421]: 21762 00:23:24.201 @path[10.0.0.2, 4421]: 21610 00:23:24.201 @path[10.0.0.2, 4421]: 21668 00:23:24.201 @path[10.0.0.2, 4421]: 22024 00:23:24.201 10:15:54 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:24.201 10:15:54 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:24.201 10:15:54 -- host/multipath.sh@69 -- # sed -n 1p 00:23:24.201 10:15:54 -- host/multipath.sh@69 -- # port=4421 00:23:24.201 10:15:54 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.201 10:15:54 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:24.201 10:15:54 -- host/multipath.sh@72 -- # kill 98405 00:23:24.201 10:15:54 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:24.201 10:15:54 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:23:24.201 10:15:54 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:24.201 10:15:54 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:24.459 10:15:54 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:23:24.459 10:15:54 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:24.459 10:15:54 -- host/multipath.sh@65 -- # dtrace_pid=98541 00:23:24.459 10:15:54 -- host/multipath.sh@66 -- # sleep 6 00:23:31.018 10:16:00 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:31.019 10:16:00 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:31.019 10:16:01 -- host/multipath.sh@67 -- # active_port=4420 00:23:31.019 10:16:01 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.019 Attaching 4 probes... 
00:23:31.019 @path[10.0.0.2, 4420]: 21835 00:23:31.019 @path[10.0.0.2, 4420]: 22243 00:23:31.019 @path[10.0.0.2, 4420]: 22383 00:23:31.019 @path[10.0.0.2, 4420]: 22199 00:23:31.019 @path[10.0.0.2, 4420]: 22174 00:23:31.019 10:16:01 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:31.019 10:16:01 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:31.019 10:16:01 -- host/multipath.sh@69 -- # sed -n 1p 00:23:31.019 10:16:01 -- host/multipath.sh@69 -- # port=4420 00:23:31.019 10:16:01 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:23:31.019 10:16:01 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:23:31.019 10:16:01 -- host/multipath.sh@72 -- # kill 98541 00:23:31.019 10:16:01 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:31.019 10:16:01 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:23:31.019 10:16:01 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:31.019 10:16:01 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:31.276 10:16:01 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:23:31.276 10:16:01 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:31.276 10:16:01 -- host/multipath.sh@65 -- # dtrace_pid=98673 00:23:31.276 10:16:01 -- host/multipath.sh@66 -- # sleep 6 00:23:37.872 10:16:07 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:37.872 10:16:07 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:37.872 10:16:07 -- host/multipath.sh@67 -- # active_port=4421 00:23:37.872 10:16:07 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:37.872 Attaching 4 probes... 
00:23:37.872 @path[10.0.0.2, 4421]: 16675 00:23:37.872 @path[10.0.0.2, 4421]: 21641 00:23:37.872 @path[10.0.0.2, 4421]: 21483 00:23:37.872 @path[10.0.0.2, 4421]: 21593 00:23:37.872 @path[10.0.0.2, 4421]: 21607 00:23:37.872 10:16:07 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:37.872 10:16:07 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:37.872 10:16:07 -- host/multipath.sh@69 -- # sed -n 1p 00:23:37.872 10:16:07 -- host/multipath.sh@69 -- # port=4421 00:23:37.872 10:16:07 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:37.872 10:16:07 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:37.872 10:16:07 -- host/multipath.sh@72 -- # kill 98673 00:23:37.872 10:16:07 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:37.872 10:16:07 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:23:37.872 10:16:07 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:37.872 10:16:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:37.872 10:16:08 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:23:37.872 10:16:08 -- host/multipath.sh@65 -- # dtrace_pid=98803 00:23:37.872 10:16:08 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:37.872 10:16:08 -- host/multipath.sh@66 -- # sleep 6 00:23:44.437 10:16:14 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:44.437 10:16:14 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:23:44.437 10:16:14 -- host/multipath.sh@67 -- # active_port= 00:23:44.437 10:16:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:44.437 Attaching 4 probes... 
00:23:44.437 00:23:44.437 00:23:44.437 00:23:44.437 00:23:44.437 00:23:44.437 10:16:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:44.437 10:16:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:44.437 10:16:14 -- host/multipath.sh@69 -- # sed -n 1p 00:23:44.437 10:16:14 -- host/multipath.sh@69 -- # port= 00:23:44.437 10:16:14 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:23:44.437 10:16:14 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:23:44.437 10:16:14 -- host/multipath.sh@72 -- # kill 98803 00:23:44.437 10:16:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:44.437 10:16:14 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:23:44.437 10:16:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:44.437 10:16:14 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:44.721 10:16:15 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:23:44.721 10:16:15 -- host/multipath.sh@65 -- # dtrace_pid=98934 00:23:44.721 10:16:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:44.721 10:16:15 -- host/multipath.sh@66 -- # sleep 6 00:23:51.286 10:16:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:51.286 10:16:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:23:51.286 10:16:21 -- host/multipath.sh@67 -- # active_port=4421 00:23:51.286 10:16:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:51.286 Attaching 4 probes... 
00:23:51.286 @path[10.0.0.2, 4421]: 21131 00:23:51.286 @path[10.0.0.2, 4421]: 21229 00:23:51.286 @path[10.0.0.2, 4421]: 21340 00:23:51.286 @path[10.0.0.2, 4421]: 21331 00:23:51.286 @path[10.0.0.2, 4421]: 21199 00:23:51.286 10:16:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:23:51.286 10:16:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:23:51.286 10:16:21 -- host/multipath.sh@69 -- # sed -n 1p 00:23:51.286 10:16:21 -- host/multipath.sh@69 -- # port=4421 00:23:51.286 10:16:21 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:23:51.286 10:16:21 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:23:51.286 10:16:21 -- host/multipath.sh@72 -- # kill 98934 00:23:51.286 10:16:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:51.286 10:16:21 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 [2024-07-14 10:16:21.544749] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1755d90 is same with the state(5) to be set
[last message repeated several dozen times for tqpair=0x1755d90, 2024-07-14 10:16:21.544854 through 10:16:21.545542, while the 4421 listener shut down]
00:23:51.287 10:16:21 -- host/multipath.sh@101 -- # sleep 1 00:23:52.224 10:16:22 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:23:52.224 10:16:22 -- host/multipath.sh@65 -- # dtrace_pid=99064 00:23:52.224 10:16:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:23:52.224 10:16:22 -- host/multipath.sh@66 -- # sleep 6 00:23:58.786 10:16:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:23:58.787 10:16:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:23:58.787 10:16:28 -- host/multipath.sh@67 -- # active_port=4420 00:23:58.787 10:16:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:23:58.787 Attaching 4 probes...
00:23:58.787 @path[10.0.0.2, 4420]: 20358
00:23:58.787 @path[10.0.0.2, 4420]: 21052
00:23:58.787 @path[10.0.0.2, 4420]: 20935
00:23:58.787 @path[10.0.0.2, 4420]: 20892
00:23:58.787 @path[10.0.0.2, 4420]: 20926
00:23:58.787 10:16:28 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:23:58.787 10:16:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:23:58.787 10:16:28 -- host/multipath.sh@69 -- # sed -n 1p
00:23:58.787 10:16:28 -- host/multipath.sh@69 -- # port=4420
00:23:58.787 10:16:28 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]]
00:23:58.787 10:16:28 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]]
00:23:58.787 10:16:28 -- host/multipath.sh@72 -- # kill 99064
00:23:58.787 10:16:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:23:58.787 10:16:28 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:58.787 [2024-07-14 10:16:29.038490] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:58.787 10:16:29 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:23:58.787 10:16:29 -- host/multipath.sh@111 -- # sleep 6
00:24:05.344 10:16:35 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421
00:24:05.344 10:16:35 -- host/multipath.sh@65 -- # dtrace_pid=99262
00:24:05.344 10:16:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 98220 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt
00:24:05.344 10:16:35 -- host/multipath.sh@66 -- # sleep 6
00:24:11.921 10:16:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1
00:24:11.921 10:16:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid'
00:24:11.921 10:16:41 -- host/multipath.sh@67 -- # active_port=4421
00:24:11.921 10:16:41 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:11.921 Attaching 4 probes...
00:24:11.921 @path[10.0.0.2, 4421]: 20742
00:24:11.921 @path[10.0.0.2, 4421]: 21047
00:24:11.921 @path[10.0.0.2, 4421]: 21109
00:24:11.921 @path[10.0.0.2, 4421]: 21144
00:24:11.921 @path[10.0.0.2, 4421]: 21232
00:24:11.921 10:16:41 -- host/multipath.sh@69 -- # cut -d ']' -f1
00:24:11.921 10:16:41 -- host/multipath.sh@69 -- # sed -n 1p
00:24:11.921 10:16:41 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}'
00:24:11.921 10:16:41 -- host/multipath.sh@69 -- # port=4421
00:24:11.921 10:16:41 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]]
00:24:11.921 10:16:41 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]]
00:24:11.921 10:16:41 -- host/multipath.sh@72 -- # kill 99262
00:24:11.921 10:16:41 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:11.921 10:16:41 -- host/multipath.sh@114 -- # killprocess 98318
00:24:11.921 10:16:41 -- common/autotest_common.sh@926 -- # '[' -z 98318 ']'
00:24:11.921 10:16:41 -- common/autotest_common.sh@930 -- # kill -0 98318
00:24:11.921 10:16:41 -- common/autotest_common.sh@931 -- # uname
00:24:11.921 10:16:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:11.921 10:16:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98318
00:24:11.921 killing process with pid 98318
00:24:11.921 10:16:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:11.921 10:16:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:11.921 10:16:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98318'
00:24:11.921 10:16:41 -- common/autotest_common.sh@945 -- # kill 98318
00:24:11.921 10:16:41 -- common/autotest_common.sh@950 -- # wait 98318
00:24:11.921 Connection closed with partial response:
00:24:11.921
00:24:11.921
00:24:11.921 10:16:41 -- host/multipath.sh@116 -- # wait 98318
00:24:11.921 10:16:41 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:11.921 [2024-07-14 10:15:44.778313] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:11.921 [2024-07-14 10:15:44.778405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid98318 ]
00:24:11.921 [2024-07-14 10:15:44.914633] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:11.921 [2024-07-14 10:15:44.985671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:11.921 Running I/O for 90 seconds...
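The @path counters above come from the nvmf_path.bt probes, and confirm_io_on_port passes when both the traced I/O and the listener's reported ANA state agree on the expected port. As an aside, here is a minimal bash sketch of that flow, reconstructed from the xtrace lines (host/multipath.sh@64-@73): the individual commands are taken verbatim from the log, while the function body, the return-code handling, and the $rootdir/$bdevperf_pid variables are assumptions, not the actual multipath.sh source.

    confirm_io_on_port() {
        local ana_state=$1 expected_port=$2

        # @64-@66: attach the nvmf_path.bt probes to the bdevperf process and let I/O flow
        "$rootdir/scripts/bpftrace.sh" "$bdevperf_pid" "$rootdir/scripts/bpf/nvmf_path.bt" > trace.txt &
        dtrace_pid=$!
        sleep 6

        # @67: ask the target which listener currently reports the wanted ANA state
        active_port=$("$rootdir/scripts/rpc.py" nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 |
            jq -r ".[] | select (.ana_states[0].ana_state==\"$ana_state\") | .address.trsvcid")

        # @68-@69: the first @path[10.0.0.2, <port>] line in the probe output names the
        # port the I/O actually used
        port=$(cat trace.txt | awk '$1=="@path[10.0.0.2," {print $2}' | cut -d ']' -f1 | sed -n 1p)

        # @70-@73: both views must match the expected port; then stop the tracer and clean up
        [[ $port == "$expected_port" && $active_port == "$expected_port" ]]
        local rc=$?
        kill "$dtrace_pid"
        rm -f trace.txt
        return "$rc"
    }

In the log this is invoked as confirm_io_on_port non_optimized 4420 before the listener move and confirm_io_on_port optimized 4421 after it.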
00:24:11.921 [2024-07-14 10:15:54.796834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:32456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.921 [2024-07-14 10:15:54.796902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
[... further READ/WRITE print_command entries (lba 31984-33104), each paired with an identical ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion, sqhd 0038 through 0020, timestamps 10:15:54.796972-803287 ...]
00:24:11.923 [2024-07-14 10:16:01.355608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:16032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:11.923 [2024-07-14 10:16:01.355678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[... further command/completion pairs of the same form (lba 15360-16256), sqhd 003d through 006e, timestamps 10:16:01.355748-358381 ...]
00:24:11.925 [2024-07-14 10:16:01.358408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111
nsid:1 lba:15760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:15768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:16272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.925 [2024-07-14 10:16:01.358685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.925 [2024-07-14 10:16:01.358824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:16320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.925 [2024-07-14 10:16:01.358870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:16336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.358956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.358989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:16344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.925 [2024-07-14 10:16:01.359013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:15800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:15808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:15880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:15896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 
00:24:11.925 [2024-07-14 10:16:01.359401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:11.925 [2024-07-14 10:16:01.359691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.925 [2024-07-14 10:16:01.359706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.359752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:16408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.359796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.359850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:11 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.359892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.359930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.359968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.359991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:15952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:16480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:16488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.360619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:16496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.360657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:11.926 [2024-07-14 10:16:01.360740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.360893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.360930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:16560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.360969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.360992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 
nsid:1 lba:16592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:16600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:16624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:16640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.926 [2024-07-14 10:16:01.361696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.926 [2024-07-14 10:16:01.361740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.926 [2024-07-14 10:16:01.361769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:01.361784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:01.361812] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:01.361828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.341681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.341770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:46576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:46584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:46592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:46608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.341983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.341997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.342031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 
00:24:11.927 [2024-07-14 10:16:08.342051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:46632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:45952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:45976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:45984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:46088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:46096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:46136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:46152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:46160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.342756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.342819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.342884] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.342974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.342996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:46680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.343011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:46688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.343065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.343104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:46704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.927 [2024-07-14 10:16:08.343156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.343193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.343387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.343432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:46736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:11.927 [2024-07-14 10:16:08.343471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.927 [2024-07-14 10:16:08.343510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:11.927 [2024-07-14 10:16:08.343533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:46752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.343638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:46776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.343677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.343716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:46792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.343755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:46800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:46808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 
nsid:1 lba:46816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.343972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.343987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.344033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:46192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:46208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344271] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:46224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:46248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:46264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:46864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:46872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:46880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.344567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:11.928 [2024-07-14 10:16:08.344607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:11.928 [2024-07-14 10:16:08.344630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:11.928 [2024-07-14 10:16:08.344645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 
dnr:0
00:24:11.928 [2024-07-14 10:16:08.344669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:46912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:11.928 [2024-07-14 10:16:08.344684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
[... 10:16:08.344708 - 10:16:08.347488: repeated READ/WRITE command/completion pairs on sqid:1, all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:24:11.930 [2024-07-14 10:16:08.347517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:47256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
[2024-07-14 10:16:08.347533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
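The "(SCT/SC)" pairs that spdk_nvme_print_completion prints above and below are the NVMe status code type and status code: (03/02) is SCT 3h (Path Related Status) with SC 02h, Asymmetric Access Inaccessible, i.e. the target's ANA state made the namespace unreachable through this path, while the (00/08) completions that follow are SCT 0h (Generic Command Status) with SC 08h, Command Aborted due to SQ Deletion. A minimal, self-contained decoder sketch for these two pairs (illustrative only, not SPDK code; the bit layout matches the NVMe completion status field with the phase bit at bit 0, SC at bits 8:1, SCT at bits 11:9):

```c
#include <stdint.h>
#include <stdio.h>

/* Name table covering only the two statuses that appear in this log. */
static const char *describe(uint8_t sct, uint8_t sc)
{
    if (sct == 0x3 && sc == 0x2)
        return "ASYMMETRIC ACCESS INACCESSIBLE";  /* Path Related Status */
    if (sct == 0x0 && sc == 0x8)
        return "ABORTED - SQ DELETION";           /* Generic Command Status */
    return "other";
}

int main(void)
{
    /* Raw completion status halfwords that would print as (03/02) and (00/08). */
    uint16_t raw[] = { (0x3u << 9) | (0x02u << 1), (0x0u << 9) | (0x08u << 1) };

    for (int i = 0; i < 2; i++) {
        uint8_t sc  = (raw[i] >> 1) & 0xff;   /* status code */
        uint8_t sct = (raw[i] >> 9) & 0x7;    /* status code type */
        printf("(%02x/%02x) -> %s\n", sct, sc, describe(sct, sc));
    }
    return 0;
}
```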
00:24:11.930 [2024-07-14 10:16:21.546002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:90208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-07-14 10:16:21.546046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 10:16:21.546074 - 10:16:21.550515: repeated READ/WRITE command/completion pairs on sqid:1, all completed with ABORTED - SQ DELETION (00/08) ...]
00:24:11.933 [2024-07-14 10:16:21.550530] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1068790 is same with the state(5) to be set
00:24:11.933 [2024-07-14 10:16:21.550548] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:11.933 [2024-07-14 10:16:21.550559] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:11.933 [2024-07-14 10:16:21.550571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90280 len:8 PRP1 0x0 PRP2 0x0
00:24:11.933 [2024-07-14 10:16:21.550586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:11.933 [2024-07-14 10:16:21.550660] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1068790 was disconnected and freed. reset controller.
00:24:11.933 [2024-07-14 10:16:21.552268] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:11.933 [2024-07-14 10:16:21.552366] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235980 (9): Bad file descriptor
00:24:11.933 [2024-07-14 10:16:21.552507] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.933 [2024-07-14 10:16:21.552564] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:11.933 [2024-07-14 10:16:21.552586] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1235980 with addr=10.0.0.2, port=4421
00:24:11.933 [2024-07-14 10:16:21.552602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1235980 is same with the state(5) to be set
00:24:11.933 [2024-07-14 10:16:21.552626] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1235980 (9): Bad file descriptor
00:24:11.933 [2024-07-14 10:16:21.552650] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:11.933 [2024-07-14 10:16:21.552665] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:11.933 [2024-07-14 10:16:21.552679] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:11.933 [2024-07-14 10:16:21.552703] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:11.933 [2024-07-14 10:16:21.552717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:11.933 [2024-07-14 10:16:31.603089] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
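The errno = 111 loop above is ECONNREFUSED: the host-side bdev_nvme layer keeps polling its reconnect path while nothing listens on 10.0.0.2:4421, and the reset only completes once the listener is back, roughly ten seconds later. As a rough sketch (not part of this run), the same window can be driven by hand against a target like the one configured later in this log; the sleep length is illustrative:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# Tear the path down: queued I/O completes as "ABORTED - SQ DELETION" and
# the host starts a reconnect loop that fails with connect() errno 111.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
sleep 10   # nothing listens during this window
# Restore the path: the next reconnect poll succeeds and bdev_nvme logs
# "Resetting controller successful."
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421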
00:24:11.933 Received shutdown signal, test time was about 54.931062 seconds
00:24:11.933
00:24:11.933                                                                                                  Latency(us)
00:24:11.933 Device Information                                                          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:24:11.933 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:11.933 Verification LBA range: start 0x0 length 0x4000
00:24:11.933 Nvme0n1                                                                     :      54.93   12167.32      47.53       0.00     0.00   10503.18     390.98 7046430.72
00:24:11.933 ===================================================================================================================
00:24:11.933 Total                                                                       :            12167.32      47.53       0.00     0.00   10503.18     390.98 7046430.72
00:24:11.933 10:16:41 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:11.933 10:16:42 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT
00:24:11.933 10:16:42 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
00:24:11.933 10:16:42 -- host/multipath.sh@125 -- # nvmftestfini
00:24:11.933 10:16:42 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:11.933 10:16:42 -- nvmf/common.sh@116 -- # sync
00:24:11.933 10:16:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:11.933 10:16:42 -- nvmf/common.sh@119 -- # set +e
00:24:11.933 10:16:42 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:11.933 10:16:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:11.933 rmmod nvme_tcp
00:24:11.933 rmmod nvme_fabrics
00:24:11.933 rmmod nvme_keyring
00:24:11.933 10:16:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:11.933 10:16:42 -- nvmf/common.sh@123 -- # set -e
00:24:11.933 10:16:42 -- nvmf/common.sh@124 -- # return 0
00:24:11.933 10:16:42 -- nvmf/common.sh@477 -- # '[' -n 98220 ']'
00:24:11.933 10:16:42 -- nvmf/common.sh@478 -- # killprocess 98220
00:24:11.933 10:16:42 -- common/autotest_common.sh@926 -- # '[' -z 98220 ']'
00:24:11.933 10:16:42 -- common/autotest_common.sh@930 -- # kill -0 98220
00:24:11.933 10:16:42 -- common/autotest_common.sh@931 -- # uname
00:24:11.933 10:16:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:11.933 10:16:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 98220
00:24:11.933 killing process with pid 98220
00:24:11.933 10:16:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:11.933 10:16:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:11.933 10:16:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 98220'
00:24:11.933 10:16:42 -- common/autotest_common.sh@945 -- # kill 98220
00:24:11.933 10:16:42 -- common/autotest_common.sh@950 -- # wait 98220
00:24:11.933 10:16:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:11.933 10:16:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:11.933 10:16:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:11.933 10:16:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:11.933 10:16:42 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:11.933 10:16:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:11.933 10:16:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:11.933 10:16:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:11.933 10:16:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:11.933
00:24:11.933 real    1m0.491s
00:24:11.933 user    2m49.949s
00:24:11.933 sys     0m14.061s
00:24:11.933 10:16:42 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:11.933
10:16:42 -- common/autotest_common.sh@10 -- # set +x 00:24:11.933 ************************************ 00:24:11.933 END TEST nvmf_multipath 00:24:11.933 ************************************ 00:24:11.933 10:16:42 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:11.933 10:16:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:24:11.933 10:16:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:24:11.933 10:16:42 -- common/autotest_common.sh@10 -- # set +x 00:24:11.933 ************************************ 00:24:11.933 START TEST nvmf_timeout 00:24:11.933 ************************************ 00:24:11.933 10:16:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:24:12.192 * Looking for test storage... 00:24:12.192 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:12.192 10:16:42 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:12.192 10:16:42 -- nvmf/common.sh@7 -- # uname -s 00:24:12.192 10:16:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:12.192 10:16:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:12.192 10:16:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:12.192 10:16:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:12.192 10:16:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:12.192 10:16:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:12.192 10:16:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:12.192 10:16:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:12.192 10:16:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:12.192 10:16:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:12.192 10:16:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:24:12.192 10:16:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:24:12.192 10:16:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:12.193 10:16:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:12.193 10:16:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:12.193 10:16:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:12.193 10:16:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:12.193 10:16:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:12.193 10:16:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:12.193 10:16:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.193 10:16:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.193 10:16:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.193 10:16:42 -- paths/export.sh@5 -- # export PATH 00:24:12.193 10:16:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:12.193 10:16:42 -- nvmf/common.sh@46 -- # : 0 00:24:12.193 10:16:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:12.193 10:16:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:12.193 10:16:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:12.193 10:16:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:12.193 10:16:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:12.193 10:16:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:12.193 10:16:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:12.193 10:16:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:12.193 10:16:42 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:12.193 10:16:42 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:12.193 10:16:42 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:12.193 10:16:42 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:24:12.193 10:16:42 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:12.193 10:16:42 -- host/timeout.sh@19 -- # nvmftestinit 00:24:12.193 10:16:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:24:12.193 10:16:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:12.193 10:16:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:12.193 10:16:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:12.193 10:16:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:12.193 10:16:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:12.193 10:16:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:12.193 10:16:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:12.193 10:16:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 
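Everything timeout.sh itself configures is in the trace above; restated in one place for readability (values copied verbatim, grouping and comments added here):

MALLOC_BDEV_SIZE=64          # backing Malloc bdev for the namespace, in MiB
MALLOC_BLOCK_SIZE=512        # block size of that bdev, in bytes
rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py        # RPC client for the target
bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh   # optional tracing helper
bdevperf_rpc_sock=/var/tmp/bdevperf.sock   # bdevperf listens on its own socket,
                                           # separate from the target's /var/tmp/spdk.sock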
00:24:12.193 10:16:42 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:24:12.193 10:16:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:24:12.193 10:16:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:24:12.193 10:16:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:24:12.193 10:16:42 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:24:12.193 10:16:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:12.193 10:16:42 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:12.193 10:16:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:24:12.193 10:16:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:24:12.193 10:16:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:24:12.193 10:16:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:24:12.193 10:16:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:24:12.193 10:16:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:12.193 10:16:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:24:12.193 10:16:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:24:12.193 10:16:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:24:12.193 10:16:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:24:12.193 10:16:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:24:12.193 10:16:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:24:12.193 Cannot find device "nvmf_tgt_br"
00:24:12.193 10:16:42 -- nvmf/common.sh@154 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:24:12.193 Cannot find device "nvmf_tgt_br2"
00:24:12.193 10:16:42 -- nvmf/common.sh@155 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:24:12.193 10:16:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:24:12.193 Cannot find device "nvmf_tgt_br"
00:24:12.193 10:16:42 -- nvmf/common.sh@157 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:24:12.193 Cannot find device "nvmf_tgt_br2"
00:24:12.193 10:16:42 -- nvmf/common.sh@158 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:24:12.193 10:16:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:24:12.193 10:16:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:24:12.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:12.193 10:16:42 -- nvmf/common.sh@161 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:24:12.193 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:24:12.193 10:16:42 -- nvmf/common.sh@162 -- # true
00:24:12.193 10:16:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:24:12.193 10:16:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:24:12.193 10:16:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:24:12.193 10:16:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:24:12.193 10:16:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:24:12.193 10:16:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:24:12.193 10:16:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:24:12.193 10:16:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:24:12.193 10:16:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:24:12.452 10:16:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:24:12.452 10:16:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:24:12.452 10:16:42 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:24:12.452 10:16:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:24:12.452 10:16:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:24:12.452 10:16:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:24:12.452 10:16:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:24:12.452 10:16:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:24:12.452 10:16:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:24:12.452 10:16:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:24:12.452 10:16:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:24:12.452 10:16:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:24:12.452 10:16:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:24:12.452 10:16:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:24:12.452 10:16:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:24:12.452 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:12.452 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms
00:24:12.452
00:24:12.452 --- 10.0.0.2 ping statistics ---
00:24:12.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.452 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms
00:24:12.452 10:16:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:24:12.452 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:24:12.452 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms
00:24:12.452
00:24:12.452 --- 10.0.0.3 ping statistics ---
00:24:12.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.452 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms
00:24:12.452 10:16:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
00:24:12.452 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:12.452 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms
00:24:12.452
00:24:12.452 --- 10.0.0.1 ping statistics ---
00:24:12.452 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:12.452 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms
00:24:12.452 10:16:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:12.452 10:16:42 -- nvmf/common.sh@421 -- # return 0
00:24:12.452 10:16:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']'
00:24:12.452 10:16:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:12.452 10:16:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]]
00:24:12.452 10:16:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]]
00:24:12.452 10:16:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:12.452 10:16:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']'
00:24:12.452 10:16:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp
00:24:12.452 10:16:42 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3
00:24:12.452 10:16:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt
00:24:12.452 10:16:42 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:12.452 10:16:42 -- common/autotest_common.sh@10 -- # set +x
00:24:12.452 10:16:42 -- nvmf/common.sh@469 -- # nvmfpid=99575
00:24:12.452 10:16:42 -- nvmf/common.sh@470 -- # waitforlisten 99575
00:24:12.452 10:16:42 -- common/autotest_common.sh@819 -- # '[' -z 99575 ']'
00:24:12.452 10:16:42 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:24:12.452 10:16:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:12.452 10:16:42 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:12.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:12.452 10:16:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:12.452 10:16:42 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:12.452 10:16:42 -- common/autotest_common.sh@10 -- # set +x
00:24:12.452 [2024-07-14 10:16:42.920010] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:12.452 [2024-07-14 10:16:42.920115] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:12.710 [2024-07-14 10:16:43.058663] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:12.710 [2024-07-14 10:16:43.111561] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:12.710 [2024-07-14 10:16:43.111715] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:12.710 [2024-07-14 10:16:43.111728] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:12.710 [2024-07-14 10:16:43.111735] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
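The nvmf_veth_init block above boils down to a two-veth, one-bridge topology with the target end of each pair hidden in a network namespace; a condensed sketch, assuming the names and addresses from the trace (the second target pair, nvmf_tgt_if2/nvmf_tgt_br2, and the stale-link cleanup are omitted for brevity):

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br    # both bridge-side veth ends join nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2    # initiator -> target reachability, as verified above
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3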
00:24:12.710 [2024-07-14 10:16:43.111877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:12.710 [2024-07-14 10:16:43.111886] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:13.274 10:16:43 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:13.274 10:16:43 -- common/autotest_common.sh@852 -- # return 0
00:24:13.274 10:16:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt
00:24:13.274 10:16:43 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:13.274 10:16:43 -- common/autotest_common.sh@10 -- # set +x
00:24:13.531 10:16:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:24:13.531 10:16:43 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:24:13.531 10:16:43 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:24:13.789 [2024-07-14 10:16:44.101576] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:13.789 10:16:44 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
00:24:14.047 Malloc0
00:24:14.047 10:16:44 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:14.304 10:16:44 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:14.304 10:16:44 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:14.562 [2024-07-14 10:16:45.001474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:14.562 10:16:45 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:14.562 10:16:45 -- host/timeout.sh@32 -- # bdevperf_pid=99666
00:24:14.562 10:16:45 -- host/timeout.sh@34 -- # waitforlisten 99666 /var/tmp/bdevperf.sock
00:24:14.562 10:16:45 -- common/autotest_common.sh@819 -- # '[' -z 99666 ']'
00:24:14.562 10:16:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:14.562 10:16:45 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:14.562 10:16:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:14.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:14.562 10:16:45 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:14.562 10:16:45 -- common/autotest_common.sh@10 -- # set +x
00:24:14.562 [2024-07-14 10:16:45.057823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
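Stripped of the xtrace noise, the target provisioning above is one transport plus four object RPCs against the target's default socket; restated for readability (commands verbatim from the trace, comments interpretive):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192    # TCP transport; -u sets the I/O unit size
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM-backed bdev, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf is then launched with -z and its own RPC socket (-r /var/tmp/bdevperf.sock), so it sits idle until it is configured over that socket and told to run via perform_tests, rather than taking a workload at startup.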
00:24:14.562 [2024-07-14 10:16:45.057926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99666 ] 00:24:14.820 [2024-07-14 10:16:45.195049] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:14.820 [2024-07-14 10:16:45.279829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:15.790 10:16:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:24:15.790 10:16:45 -- common/autotest_common.sh@852 -- # return 0 00:24:15.790 10:16:45 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:15.790 10:16:46 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:24:16.048 NVMe0n1 00:24:16.048 10:16:46 -- host/timeout.sh@51 -- # rpc_pid=99714 00:24:16.048 10:16:46 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:16.048 10:16:46 -- host/timeout.sh@53 -- # sleep 1 00:24:16.048 Running I/O for 10 seconds... 00:24:16.984 10:16:47 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:17.248 [2024-07-14 10:16:47.676967] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677046] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677083] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677091] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677099] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677107] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677114] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677122] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677129] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677137] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677144] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 
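The attach above is where the test's two timeout knobs enter, and the remove_listener call is the fault injection itself: with the only listener gone, queued commands complete as ABORTED - SQ DELETION (the dump that follows, which the tcp.c recv-state messages around this point bracket) and the reconnect behaviour is governed entirely by the attach-time options. Restated (commands verbatim from the trace, comments interpretive):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
# --reconnect-delay-sec 2:    wait 2 s between reconnect attempts
# --ctrlr-loss-timeout-sec 5: give up on the controller after 5 s unreachable
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420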
[2024-07-14 10:16:47.677151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677166] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677173] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677180] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677188] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677195] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677202] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677247] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677257] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677274] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677282] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677290] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677299] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677315] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677323] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677332] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677413] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677421] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677430] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677438] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677446] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677454] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677462] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677471] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677496] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677504] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677512] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677519] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677527] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677536] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198ab30 is same with the state(5) to be set 00:24:17.248 [2024-07-14 10:16:47.677794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:130384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.248 [2024-07-14 10:16:47.677824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.248 [2024-07-14 10:16:47.677848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:129624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.248 [2024-07-14 10:16:47.677859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.248 [2024-07-14 10:16:47.677871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:129632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.248 [2024-07-14 10:16:47.677881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:129664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.677901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.677921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.677941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:129696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.677961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:129720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.677982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.677993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:129728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:9 nsid:1 lba:129744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:129768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:129776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:129856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:129904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:129912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:130392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:130408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:130424 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:130432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:130440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:130456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:130480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:130488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:130504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:129920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:129928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:129968 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:129976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:130008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.249 [2024-07-14 10:16:47.678519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:130016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.249 [2024-07-14 10:16:47.678528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:130520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 [2024-07-14 10:16:47.678548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:130528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 [2024-07-14 10:16:47.678568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:130560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 [2024-07-14 10:16:47.678589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:130568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 [2024-07-14 10:16:47.678610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:130584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 [2024-07-14 10:16:47.678630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:17.250 [2024-07-14 10:16:47.678641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:130592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:17.250 
00:24:17.250 [2024-07-14 10:16:47.678650 .. 10:16:47.680484] [... roughly eighty further queued READ/WRITE commands on sqid:1 (lba 130024 .. 131008, len:8), each printed by nvme_qpair.c: 243:nvme_io_qpair_print_command and completed by nvme_qpair.c: 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:24:17.252 [2024-07-14 10:16:47.680499] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb458a0 is same with the state(5) to be set
00:24:17.252 [2024-07-14 10:16:47.680512] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:24:17.252 [2024-07-14 10:16:47.680520] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:24:17.252 [2024-07-14 10:16:47.680528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:130664 len:8 PRP1 0x0 PRP2 0x0
00:24:17.252 [2024-07-14 10:16:47.680537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:17.252 [2024-07-14 10:16:47.680590] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb458a0 was disconnected and freed. reset controller.
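Every pair of entries in the block above is SPDK printing one queued I/O command and then completing it with NVMe status ABORTED - SQ DELETION (status code type 0x0, status code 0x08) as the submission queue is torn down. A quick way to gauge how much I/O was in flight when the qpair died is to count those completions in the captured console output; the log file name below is a placeholder, not part of the test:

    # Count completions aborted by SQ deletion (log path is hypothetical).
    grep -c 'ABORTED - SQ DELETION' nvmf-tcp-vg-autotest.log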
00:24:17.252 [2024-07-14 10:16:47.680825] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.252 [2024-07-14 10:16:47.680911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb275e0 (9): Bad file descriptor
00:24:17.252 [2024-07-14 10:16:47.681030] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.252 [2024-07-14 10:16:47.681080] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:17.252 [2024-07-14 10:16:47.681102] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb275e0 with addr=10.0.0.2, port=4420
00:24:17.252 [2024-07-14 10:16:47.681113] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb275e0 is same with the state(5) to be set
00:24:17.252 [2024-07-14 10:16:47.681132] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb275e0 (9): Bad file descriptor
00:24:17.252 [2024-07-14 10:16:47.681148] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:17.253 [2024-07-14 10:16:47.681158] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:17.253 [2024-07-14 10:16:47.681169] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:17.253 [2024-07-14 10:16:47.681189] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:17.253 [2024-07-14 10:16:47.681212] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:17.253 10:16:47 -- host/timeout.sh@56 -- # sleep 2
00:24:19.155 [2024-07-14 10:16:49.681333] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.155 [2024-07-14 10:16:49.681448] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:19.155 [2024-07-14 10:16:49.681467] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb275e0 with addr=10.0.0.2, port=4420
00:24:19.155 [2024-07-14 10:16:49.681479] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb275e0 is same with the state(5) to be set
00:24:19.155 [2024-07-14 10:16:49.681503] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb275e0 (9): Bad file descriptor
00:24:19.155 [2024-07-14 10:16:49.681522] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:19.155 [2024-07-14 10:16:49.681532] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:19.155 [2024-07-14 10:16:49.681542] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:19.155 [2024-07-14 10:16:49.681568] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
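The reconnect attempts above land two seconds apart (10:16:47, :49, :51, :53) and every connect() fails with errno = 111. On Linux that is ECONNREFUSED, i.e. nothing is accepting connections on 10.0.0.2:4420 at this point; the listener is only re-added later via nvmf_subsystem_add_listener. A one-liner to confirm the errno name, assuming a stock python3 on the box:

    # errno 111 on Linux is ECONNREFUSED ("Connection refused").
    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'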
00:24:19.155 [2024-07-14 10:16:49.681579] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:19.414 10:16:49 -- host/timeout.sh@57 -- # get_controller
00:24:19.414 10:16:49 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:19.414 10:16:49 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:19.673 10:16:49 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]]
00:24:19.673 10:16:49 -- host/timeout.sh@58 -- # get_bdev
00:24:19.673 10:16:49 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:19.673 10:16:49 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:19.673 10:16:50 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]]
00:24:19.673 10:16:50 -- host/timeout.sh@61 -- # sleep 5
00:24:21.574 [2024-07-14 10:16:51.681719] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.574 [2024-07-14 10:16:51.681827] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:21.574 [2024-07-14 10:16:51.681846] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb275e0 with addr=10.0.0.2, port=4420
00:24:21.574 [2024-07-14 10:16:51.681859] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb275e0 is same with the state(5) to be set
00:24:21.574 [2024-07-14 10:16:51.681883] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb275e0 (9): Bad file descriptor
00:24:21.574 [2024-07-14 10:16:51.681901] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:21.574 [2024-07-14 10:16:51.681911] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:21.574 [2024-07-14 10:16:51.681921] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:21.574 [2024-07-14 10:16:51.681947] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:21.574 [2024-07-14 10:16:51.681958] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:23.474 [2024-07-14 10:16:53.681981] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:23.474 [2024-07-14 10:16:53.682037] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:23.474 [2024-07-14 10:16:53.682064] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:23.474 [2024-07-14 10:16:53.682074] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:23.474 [2024-07-14 10:16:53.682098] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
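The get_controller and get_bdev helpers traced above each reduce to one RPC call piped through jq; outside the harness the same probe can be reproduced by hand, as a sketch (socket and script paths exactly as in the trace, expected names as asserted by the test):

    # Reproduce the test's controller/bdev probe by hand.
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    ctrlr=$("$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name')   # expect NVMe0
    bdev=$("$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name')               # expect NVMe0n1
    [[ $ctrlr == NVMe0 && $bdev == NVMe0n1 ]] && echo 'controller and bdev still present'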
00:24:24.409
00:24:24.409                                                 Latency(us)
00:24:24.409 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s     Average        min         max
00:24:24.409 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:24.409 Verification LBA range: start 0x0 length 0x4000
00:24:24.409     NVMe0n1                 :       8.13  1998.24     7.81    15.74     0.00    63457.54    2770.39  7015926.69
00:24:24.409 ===================================================================================================================
00:24:24.409 Total                       :              1998.24     7.81    15.74     0.00    63457.54    2770.39  7015926.69
00:24:24.409 0
00:24:24.668 10:16:55 -- host/timeout.sh@62 -- # get_controller
00:24:24.668 10:16:55 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:24.668 10:16:55 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:24:24.927 10:16:55 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:24:24.927 10:16:55 -- host/timeout.sh@63 -- # get_bdev
00:24:24.927 10:16:55 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:24:24.927 10:16:55 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:24:25.187 10:16:55 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:24:25.187 10:16:55 -- host/timeout.sh@65 -- # wait 99714
00:24:25.187 10:16:55 -- host/timeout.sh@67 -- # killprocess 99666
00:24:25.187 10:16:55 -- common/autotest_common.sh@926 -- # '[' -z 99666 ']'
00:24:25.187 10:16:55 -- common/autotest_common.sh@930 -- # kill -0 99666
00:24:25.187 10:16:55 -- common/autotest_common.sh@931 -- # uname
00:24:25.187 10:16:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:25.187 10:16:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99666
00:24:25.187 10:16:55 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:25.187 10:16:55 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
killing process with pid 99666
10:16:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99666'
10:16:55 -- common/autotest_common.sh@945 -- # kill 99666
00:24:25.187 Received shutdown signal, test time was about 9.130308 seconds
00:24:25.187
00:24:25.187                                                 Latency(us)
00:24:25.187 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s     TO/s     Average        min         max
00:24:25.187 ===================================================================================================================
00:24:25.187 Total                       :                 0.00     0.00     0.00     0.00        0.00       0.00        0.00
10:16:55 -- common/autotest_common.sh@950 -- # wait 99666
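The killprocess helper traced above follows a defensive pattern: check the pid is still alive with kill -0, read its comm name to make sure the target is not a sudo wrapper, then send SIGTERM and reap it. A simplified stand-in for that pattern (the real helper lives in common/autotest_common.sh and handles more cases):

    # Simplified sketch of the killprocess pattern seen in the trace.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 1      # bail out if already gone
        local name
        name=$(ps --no-headers -o comm= "$pid")
        [ "$name" = sudo ] && return 1              # never TERM a sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null                     # reap (works for the shell's own children)
    }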
00:24:25.446 10:16:55 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:25.705 [2024-07-14 10:16:56.044889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:25.705 10:16:56 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:24:25.705 10:16:56 -- host/timeout.sh@74 -- # bdevperf_pid=99868
00:24:25.705 10:16:56 -- host/timeout.sh@76 -- # waitforlisten 99868 /var/tmp/bdevperf.sock
00:24:25.705 10:16:56 -- common/autotest_common.sh@819 -- # '[' -z 99868 ']'
00:24:25.705 10:16:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:25.705 10:16:56 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:25.705 10:16:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:25.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:25.705 10:16:56 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:25.705 10:16:56 -- common/autotest_common.sh@10 -- # set +x
00:24:25.965 [2024-07-14 10:16:56.099497] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:25.965 [2024-07-14 10:16:56.099594] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid99868 ]
00:24:25.706 [2024-07-14 10:16:56.229028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:25.965 [2024-07-14 10:16:56.295403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:26.532 10:16:57 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:26.532 10:16:57 -- common/autotest_common.sh@852 -- # return 0
00:24:26.532 10:16:57 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:24:26.791 10:16:57 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:24:27.051 NVMe0n1
00:24:27.051 10:16:57 -- host/timeout.sh@84 -- # rpc_pid=99916
00:24:27.051 10:16:57 -- host/timeout.sh@86 -- # sleep 1
00:24:27.309 10:16:57 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:27.309 Running I/O for 10 seconds...
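For this second pass the harness starts bdevperf with -z, so the app sits idle waiting for RPC configuration and no I/O can begin until the controller is attached over the RPC socket; the reconnect knobs under test (--ctrlr-loss-timeout-sec 5, --fast-io-fail-timeout-sec 2, --reconnect-delay-sec 1) ride along on the attach call. The traced bring-up, condensed into one sketch (commands and values exactly as in the trace; the waitforlisten handshake between launch and first RPC is elided):

    # Condensed bdevperf bring-up as traced above.
    spdk=/home/vagrant/spdk_repo/spdk
    sock=/var/tmp/bdevperf.sock
    "$spdk"/build/examples/bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 -f &
    bdevperf_pid=$!
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_set_options -r -1    # options as traced
    "$spdk"/scripts/rpc.py -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
    "$spdk"/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &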
00:24:28.244 10:16:58 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:28.507 [2024-07-14 10:16:58.824043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b307a0 is same with the state(5) to be set
00:24:28.507 [... the same tcp.c:1574 *ERROR* message repeated roughly seventeen more times, 10:16:58.824104 through 10:16:58.824254 ...]
00:24:28.507 [2024-07-14 10:16:58.824708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:28.507 [2024-07-14 10:16:58.824750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:28.508 [... several dozen further queued READ/WRITE commands on sqid:1 (lba 4552 .. 5496, len:8), each printed and completed the same way with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0; the console capture breaks off mid-entry at [2024-07-14 10:16:58.826383] ...]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.508 [2024-07-14 10:16:58.826393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.508 [2024-07-14 10:16:58.826405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.508 [2024-07-14 10:16:58.826415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.508 [2024-07-14 10:16:58.826428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.508 [2024-07-14 10:16:58.826438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:5528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:5576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826655] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:5640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5664 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.826922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:5680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.826980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.826991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 
[2024-07-14 10:16:58.827135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:28.509 [2024-07-14 10:16:58.827470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.509 [2024-07-14 10:16:58.827505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.509 [2024-07-14 10:16:58.827515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:5808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:5816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:28.510 [2024-07-14 10:16:58.827774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827784] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b71780 is same with the state(5) to be set 00:24:28.510 [2024-07-14 10:16:58.827795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:28.510 [2024-07-14 10:16:58.827803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:28.510 [2024-07-14 10:16:58.827811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5264 len:8 PRP1 0x0 PRP2 0x0 00:24:28.510 [2024-07-14 10:16:58.827820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:28.510 [2024-07-14 10:16:58.827887] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b71780 was disconnected and freed. reset controller. 
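The "(00/08)" printed on every completion above is the NVMe status-code-type/status-code pair: SCT 0x0 (generic command status) with SC 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion - exactly what happens when I/O submission queue 1 is torn down during the reset. A minimal sketch of how those fields unpack from the upper half of completion dword 3 (plain spec bit layout, not SPDK's own structs; the constant is the value seen throughout this log):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Upper 16 bits of NVMe CQE dword 3: phase tag in bit 0, then the
         * 15-bit status field. 0x0010 encodes the completion seen here:
         * p:0, SC 0x08, SCT 0x0, m:0, dnr:0. */
        uint16_t dw3_hi = 0x0010;

        unsigned p   =  dw3_hi        & 0x1;  /* phase tag        -> p:0   */
        unsigned sc  = (dw3_hi >> 1)  & 0xff; /* status code      -> 0x08  */
        unsigned sct = (dw3_hi >> 9)  & 0x7;  /* status code type -> 0x0   */
        unsigned m   = (dw3_hi >> 14) & 0x1;  /* more             -> m:0   */
        unsigned dnr = (dw3_hi >> 15) & 0x1;  /* do not retry     -> dnr:0 */

        /* SCT 0 / SC 0x08 = "Command Aborted due to SQ Deletion". */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        return 0;
    }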
00:24:28.510 [2024-07-14 10:16:58.828112] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:28.510 [2024-07-14 10:16:58.828185] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:28.510 [2024-07-14 10:16:58.828316] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.510 [2024-07-14 10:16:58.828372] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:28.510 [2024-07-14 10:16:58.828392] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:28.510 [2024-07-14 10:16:58.828404] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:28.510 [2024-07-14 10:16:58.828423] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:28.510 [2024-07-14 10:16:58.828441] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:28.510 [2024-07-14 10:16:58.828451] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:28.510 [2024-07-14 10:16:58.828462] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:28.510 [2024-07-14 10:16:58.828483] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:28.510 [2024-07-14 10:16:58.828494] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:28.510 10:16:58 -- host/timeout.sh@90 -- # sleep 1
00:24:29.446 [2024-07-14 10:16:59.828613] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.446 [2024-07-14 10:16:59.828726] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:29.446 [2024-07-14 10:16:59.828746] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:29.446 [2024-07-14 10:16:59.828760] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:29.446 [2024-07-14 10:16:59.828784] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:29.446 [2024-07-14 10:16:59.828811] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:29.446 [2024-07-14 10:16:59.828821] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:29.446 [2024-07-14 10:16:59.828832] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:29.446 [2024-07-14 10:16:59.828857] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
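Both reset attempts above fail the same way: the test has taken the TCP listener down, so the initiator's connect() to 10.0.0.2:4420 is refused; errno 111 on Linux is ECONNREFUSED. A self-contained sketch (address and port taken from the log) that reproduces this failure mode against any port with no listener:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0)
            return 1;

        /* With the listener removed, nothing accepts on port 4420, so the
         * target host answers the SYN with RST and connect() fails. */
        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0)
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

        close(fd);
        return 0;
    }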
00:24:29.446 [2024-07-14 10:16:59.828869] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:29.446 10:16:59 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:29.704 [2024-07-14 10:17:00.081485] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:29.704 10:17:00 -- host/timeout.sh@92 -- # wait 99916
00:24:30.640 [2024-07-14 10:17:00.845792] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:24:37.227
00:24:37.227                                                  Latency(us)
00:24:37.227 Device Information          : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average       min          max
00:24:37.227 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:37.227 Verification LBA range: start 0x0 length 0x4000
00:24:37.227 NVMe0n1                     :      10.01  10879.62    42.50     0.00   0.00   11747.02    983.04   3019898.88
00:24:37.227 ===================================================================================================================
00:24:37.227 Total                       :             10879.62    42.50     0.00   0.00   11747.02    983.04   3019898.88
00:24:37.227 0
00:24:37.227 10:17:07 -- host/timeout.sh@97 -- # rpc_pid=100037
00:24:37.227 10:17:07 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:37.227 10:17:07 -- host/timeout.sh@98 -- # sleep 1
00:24:38.426 Running I/O for 10 seconds...
00:24:38.426 10:17:08 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:38.426 [2024-07-14 10:17:08.957004] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198b7b0 is same with the state(5) to be set
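The bdevperf summary above is internally consistent; a quick cross-check (a sketch, with column meanings read off the header line: 4096-byte I/O at queue depth 128). IOPS times I/O size reproduces the MiB/s column, and Little's law (concurrency divided by mean latency) lands within about 0.2% of the measured IOPS:

    #include <stdio.h>

    int main(void)
    {
        /* Figures from the bdevperf summary table. */
        double iops = 10879.62, io_bytes = 4096.0, avg_lat_us = 11747.02, qd = 128.0;

        /* Throughput column: IOPS * I/O size, in MiB/s. */
        printf("MiB/s ~ %.2f\n", iops * io_bytes / (1024.0 * 1024.0)); /* 42.50  */

        /* Little's law sanity check: depth / average latency. */
        printf("IOPS  ~ %.0f\n", qd / (avg_lat_us / 1e6));             /* ~10896 */
        return 0;
    }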
[... the identical tcp.c:1574 recv-state error for tqpair=0x198b7b0 repeats from 10:17:08.957069 through 10:17:08.957357 ...]
00:24:38.427 [2024-07-14 10:17:08.957644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:6000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:38.427 [2024-07-14 10:17:08.957673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command-print/completion pair, all "ABORTED - SQ DELETION (00/08)", repeats for the remaining outstanding READ/WRITE I/O on qid:1 (lba 5376-6448, len:8) through 10:17:08.959626 ...]
00:24:38.429 [2024-07-14 10:17:08.959639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:5968 len:8 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.429 [2024-07-14 10:17:08.959649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.429 [2024-07-14 10:17:08.959661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:6456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.429 [2024-07-14 10:17:08.959671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.429 [2024-07-14 10:17:08.959683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.429 [2024-07-14 10:17:08.959694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:6480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.959738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:6496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.959783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:6504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:6512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:6528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 
10:17:08.959874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:6544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.959946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:6560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.959981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:6568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.959992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:6576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:6592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:6608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960104] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:6624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:6672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.430 [2024-07-14 10:17:08.960334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:6008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:6032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.430 [2024-07-14 10:17:08.960460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.430 [2024-07-14 10:17:08.960471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:6064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:6152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:6168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:6192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.431 [2024-07-14 10:17:08.960674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960685] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b81e30 is same with the state(5) to be set 00:24:38.431 [2024-07-14 10:17:08.960699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:38.431 [2024-07-14 10:17:08.960708] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:38.431 [2024-07-14 10:17:08.960718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6200 len:8 PRP1 0x0 PRP2 0x0 00:24:38.431 [2024-07-14 10:17:08.960728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.431 [2024-07-14 10:17:08.960782] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1b81e30 was disconnected and freed. reset controller. 
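The status "(00/08)" printed with each completion above is the NVMe (SCT/SC) pair: status code type 00h (Generic Command Status) with status code 08h, Command Aborted due to SQ Deletion, which is what the host driver reports for every I/O still queued when the submission queue disappears. A quick way to tally these aborts from a saved copy of this console output; the file name nvmf_timeout.log is only a placeholder, this job does not write it:

  # Count aborted completions in a saved log. grep -o counts every occurrence,
  # which matters here because this console log packs many records per line.
  grep -o 'ABORTED - SQ DELETION (00/08)' nvmf_timeout.log | wc -l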
00:24:38.431 [2024-07-14 10:17:08.960861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:38.431 [2024-07-14 10:17:08.960878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.431 [2024-07-14 10:17:08.960890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:38.431 [2024-07-14 10:17:08.960901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.431 [2024-07-14 10:17:08.960912] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:38.431 [2024-07-14 10:17:08.960922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.431 [2024-07-14 10:17:08.960933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:38.431 [2024-07-14 10:17:08.960943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:38.431 [2024-07-14 10:17:08.960953] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:38.431 [2024-07-14 10:17:08.961178] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.431 [2024-07-14 10:17:08.961227] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:38.431 [2024-07-14 10:17:08.961330] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-14 10:17:08.961383] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.431 [2024-07-14 10:17:08.961401] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:38.431 [2024-07-14 10:17:08.961413] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:38.431 [2024-07-14 10:17:08.961438] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:38.431 [2024-07-14 10:17:08.961455] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.431 [2024-07-14 10:17:08.961465] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.431 [2024-07-14 10:17:08.961477] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.431 [2024-07-14 10:17:08.961498] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.431 [2024-07-14 10:17:08.961509] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.690 10:17:08 -- host/timeout.sh@101 -- # sleep 3
00:24:39.627 [2024-07-14 10:17:09.961576] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.627 [2024-07-14 10:17:09.961679] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.627 [2024-07-14 10:17:09.961697] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:39.627 [2024-07-14 10:17:09.961709] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:39.627 [2024-07-14 10:17:09.961728] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:39.627 [2024-07-14 10:17:09.961743] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.627 [2024-07-14 10:17:09.961753] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.627 [2024-07-14 10:17:09.961762] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.627 [2024-07-14 10:17:09.961781] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.627 [2024-07-14 10:17:09.961791] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.563 [2024-07-14 10:17:10.961862] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.563 [2024-07-14 10:17:10.961963] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.563 [2024-07-14 10:17:10.961982] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:40.563 [2024-07-14 10:17:10.961992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:40.563 [2024-07-14 10:17:10.962011] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:40.563 [2024-07-14 10:17:10.962026] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.563 [2024-07-14 10:17:10.962035] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.563 [2024-07-14 10:17:10.962044] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.563 [2024-07-14 10:17:10.962063] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
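While the listener is down, connect() keeps failing with errno = 111 (ECONNREFUSED) and the reconnect attempts above land roughly once per second (10:17:08, 10:17:09, 10:17:10). The host-side controller state could also be watched over the same RPC socket; this loop is not part of the test script, only an illustration, assuming the bdevperf instance from this run is still serving RPC on /var/tmp/bdevperf.sock (bdev_nvme_get_controllers is a standard SPDK RPC):

  # Hypothetical observation loop, not executed by this job: poll the bdev_nvme
  # view of the controller while its reconnect attempts are failing.
  for i in 1 2 3; do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
      sleep 1
  done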
00:24:40.563 [2024-07-14 10:17:10.962073] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.496 [2024-07-14 10:17:11.963702] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.496 [2024-07-14 10:17:11.963805] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.496 [2024-07-14 10:17:11.963823] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b535e0 with addr=10.0.0.2, port=4420
00:24:41.496 [2024-07-14 10:17:11.963835] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b535e0 is same with the state(5) to be set
00:24:41.496 [2024-07-14 10:17:11.964029] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b535e0 (9): Bad file descriptor
00:24:41.496 [2024-07-14 10:17:11.964160] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.496 [2024-07-14 10:17:11.964182] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.496 [2024-07-14 10:17:11.964194] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.496 [2024-07-14 10:17:11.966415] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.496 [2024-07-14 10:17:11.966458] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.496 10:17:11 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:41.756 [2024-07-14 10:17:12.209744] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:41.756 10:17:12 -- host/timeout.sh@103 -- # wait 100037
00:24:42.692 [2024-07-14 10:17:12.984430] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
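The recovery above is driven entirely from the target side: as soon as nvmf_subsystem_add_listener restores the TCP listener, the next reconnect attempt succeeds and the controller reset completes. The whole failure window this test exercises therefore amounts to a listener toggle, sketched here with the same rpc.py invocations that appear in this log (repo path as laid out on this CI host):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  # Drop the listener: queued host I/O is aborted (SQ deletion) and the
  # bdev_nvme layer enters its reconnect loop, failing with ECONNREFUSED.
  $rpc nvmf_subsystem_remove_listener $nqn -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  # Restore the listener: the next reconnect attempt succeeds and the
  # pending controller reset completes.
  $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420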
00:24:47.960
00:24:47.961                                              Latency(us)
00:24:47.961 Device Information                         : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min         max
00:24:47.961 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:47.961 Verification LBA range: start 0x0 length 0x4000
00:24:47.961 NVMe0n1                                    :      10.01   9206.31    35.96  6633.95   0.00   8067.21   636.74  3019898.88
00:24:47.961 ===================================================================================================================
00:24:47.961 Total                                      :              9206.31    35.96  6633.95   0.00   8067.21     0.00  3019898.88
00:24:47.961 0
00:24:47.961 10:17:17 -- host/timeout.sh@105 -- # killprocess 99868
00:24:47.961 10:17:17 -- common/autotest_common.sh@926 -- # '[' -z 99868 ']'
00:24:47.961 10:17:17 -- common/autotest_common.sh@930 -- # kill -0 99868
00:24:47.961 10:17:17 -- common/autotest_common.sh@931 -- # uname
00:24:47.961 10:17:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:47.961 10:17:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99868
00:24:47.961 10:17:17 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:47.961 10:17:17 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:47.961 10:17:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99868'
00:24:47.961 killing process with pid 99868
00:24:47.961 10:17:17 -- common/autotest_common.sh@945 -- # kill 99868
00:24:47.961 Received shutdown signal, test time was about 10.000000 seconds
00:24:47.961
00:24:47.961                                              Latency(us)
00:24:47.961 Device Information                         : runtime(s)    IOPS     MiB/s   Fail/s   TO/s   Average      min         max
00:24:47.961 ===================================================================================================================
00:24:47.961 Total                                      :                 0.00     0.00     0.00   0.00      0.00     0.00        0.00
00:24:47.961 10:17:17 -- common/autotest_common.sh@950 -- # wait 99868
00:24:47.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:47.961 10:17:18 -- host/timeout.sh@110 -- # bdevperf_pid=100159
00:24:47.961 10:17:18 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:24:47.961 10:17:18 -- host/timeout.sh@112 -- # waitforlisten 100159 /var/tmp/bdevperf.sock
00:24:47.961 10:17:18 -- common/autotest_common.sh@819 -- # '[' -z 100159 ']'
00:24:47.961 10:17:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:47.961 10:17:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:47.961 10:17:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:47.961 10:17:18 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:47.961 10:17:18 -- common/autotest_common.sh@10 -- # set +x
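The relaunch just recorded, together with the RPC calls in the trace that follows, is the standard autotest pattern for driving bdevperf: start it with -z so it sits idle until told to run, wait for its UNIX-domain RPC socket, configure the NVMe bdev over that socket, then kick the I/O with the perform_tests helper. Condensed as plain shell from the commands in this log (paths as on this CI host; waitforlisten is the autotest_common.sh helper seen in the trace):

  spdk=/home/vagrant/spdk_repo/spdk
  sock=/var/tmp/bdevperf.sock
  # -z: start idle and wait for the perform_tests RPC instead of running immediately
  $spdk/build/examples/bdevperf -m 0x4 -z -r $sock -q 128 -o 4096 -w randread -t 10 -f &
  bdevperf_pid=$!
  waitforlisten $bdevperf_pid $sock   # block until the RPC socket is up
  $spdk/scripts/rpc.py -s $sock bdev_nvme_set_options -r -1 -e 9
  $spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
  $spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &

The two attach flags are the knobs this test exercises: with --reconnect-delay-sec 2 the bdev_nvme layer retries the connection roughly every 2 seconds, and with --ctrlr-loss-timeout-sec 5 it declares the controller lost if reconnection has not succeeded within 5 seconds.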
00:24:47.961 [2024-07-14 10:17:18.107208] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:47.961 [2024-07-14 10:17:18.107535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100159 ]
00:24:47.961 [2024-07-14 10:17:18.237385] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:24:47.961 [2024-07-14 10:17:18.295348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:24:48.528 10:17:19 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:48.528 10:17:19 -- common/autotest_common.sh@852 -- # return 0
00:24:48.528 10:17:19 -- host/timeout.sh@116 -- # dtrace_pid=100187
00:24:48.528 10:17:19 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 100159 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt
00:24:48.528 10:17:19 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9
00:24:49.094 10:17:19 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
00:24:49.094 NVMe0n1
00:24:49.094 10:17:19 -- host/timeout.sh@124 -- # rpc_pid=100235
00:24:49.094 10:17:19 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:24:49.094 10:17:19 -- host/timeout.sh@125 -- # sleep 1
00:24:49.353 Running I/O for 10 seconds...
00:24:50.317 10:17:20 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:50.317 [2024-07-14 10:17:20.836892] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x198fc20 is same with the state(5) to be set
[... the same recv-state message repeats for tqpair=0x198fc20 roughly 40 more times, 10:17:20.836972-10:17:20.837344 ...]
00:24:50.317 [2024-07-14 10:17:20.837668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:50.317 [2024-07-14 10:17:20.837725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.317 [2024-07-14 10:17:20.837749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:50.317 [2024-07-14 10:17:20.837760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for the remaining queued READs on qid:1 (various cid/lba), each completed with ABORTED - SQ DELETION (00/08), 10:17:20.837772-10:17:20.838894 ...]
00:24:50.319 [2024-07-14 10:17:20.838905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:50.319 [2024-07-14 10:17:20.838915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:50.319
[2024-07-14 10:17:20.838926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:46032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.838935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.838947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.838957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.838968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:112872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.838977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.838988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:111752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.838997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:130352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:34016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:38600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:65680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839146] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:117832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:69552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.319 [2024-07-14 10:17:20.839225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.319 [2024-07-14 10:17:20.839234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:64336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:113344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:83888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:119560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:51624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839379] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:123 nsid:1 lba:94816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:45912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:125632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:25352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:47704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:26312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 
nsid:1 lba:72712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:110640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:70464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:111336 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:57200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:108352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:25624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:41624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:32912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.320 [2024-07-14 10:17:20.839921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:112296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.320 [2024-07-14 10:17:20.839930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.839940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.839949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.839959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.839968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.839979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.839987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.839998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:118424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 
[2024-07-14 10:17:20.840007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:126160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:13848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:92024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:125304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:117760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:87664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840204] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:123808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:32792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:31568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:130760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:113688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:63520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:50.321 [2024-07-14 10:17:20.840430] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:50.321 [2024-07-14 10:17:20.840469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:50.321 [2024-07-14 10:17:20.840479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109616 len:8 PRP1 0x0 PRP2 0x0 00:24:50.321 [2024-07-14 10:17:20.840493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:50.321 [2024-07-14 10:17:20.840546] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x21b58a0 was disconnected and freed. reset controller. 00:24:50.321 [2024-07-14 10:17:20.840812] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:50.321 [2024-07-14 10:17:20.840911] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975e0 (9): Bad file descriptor 00:24:50.321 [2024-07-14 10:17:20.841043] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.321 [2024-07-14 10:17:20.841095] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:50.321 [2024-07-14 10:17:20.841112] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21975e0 with addr=10.0.0.2, port=4420 00:24:50.321 [2024-07-14 10:17:20.841124] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21975e0 is same with the state(5) to be set 00:24:50.321 [2024-07-14 10:17:20.841145] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975e0 (9): Bad file descriptor 00:24:50.321 [2024-07-14 10:17:20.841162] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:50.321 [2024-07-14 10:17:20.841172] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:50.321 [2024-07-14 10:17:20.841183] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:50.321 [2024-07-14 10:17:20.841218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:50.321 [2024-07-14 10:17:20.841233] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:50.321 10:17:20 -- host/timeout.sh@128 -- # wait 100235
00:24:52.850 [2024-07-14 10:17:22.841371] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.850 [2024-07-14 10:17:22.841485] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:52.850 [2024-07-14 10:17:22.841504] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21975e0 with addr=10.0.0.2, port=4420
00:24:52.850 [2024-07-14 10:17:22.841516] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21975e0 is same with the state(5) to be set
00:24:52.851 [2024-07-14 10:17:22.841539] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975e0 (9): Bad file descriptor
00:24:52.851 [2024-07-14 10:17:22.841558] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:52.851 [2024-07-14 10:17:22.841567] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:52.851 [2024-07-14 10:17:22.841578] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:52.851 [2024-07-14 10:17:22.841600] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:52.851 [2024-07-14 10:17:22.841611] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:54.772 [2024-07-14 10:17:24.841718] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.772 [2024-07-14 10:17:24.841820] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:54.772 [2024-07-14 10:17:24.841839] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21975e0 with addr=10.0.0.2, port=4420
00:24:54.772 [2024-07-14 10:17:24.841850] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21975e0 is same with the state(5) to be set
00:24:54.772 [2024-07-14 10:17:24.841870] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21975e0 (9): Bad file descriptor
00:24:54.772 [2024-07-14 10:17:24.841886] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:54.772 [2024-07-14 10:17:24.841896] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:54.772 [2024-07-14 10:17:24.841905] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:54.772 [2024-07-14 10:17:24.841926] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:54.772 [2024-07-14 10:17:24.841936] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:56.670 [2024-07-14 10:17:26.841988] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:56.670 [2024-07-14 10:17:26.842038] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:56.670 [2024-07-14 10:17:26.842063] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:56.670 [2024-07-14 10:17:26.842072] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state
00:24:56.670 [2024-07-14 10:17:26.842091] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:57.607
00:24:57.607 Latency(us)
00:24:57.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.607 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096)
00:24:57.607 NVMe0n1 : 8.16 3193.11 12.47 15.68 0.00 39856.55 3425.75 7015926.69
00:24:57.607 ===================================================================================================================
00:24:57.607 Total : 3193.11 12.47 15.68 0.00 39856.55 3425.75 7015926.69
00:24:57.607 0
00:24:57.607 10:17:27 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:57.607 Attaching 5 probes...
00:24:57.607 1239.452707: reset bdev controller NVMe0
00:24:57.607 1239.619993: reconnect bdev controller NVMe0
00:24:57.607 3239.905989: reconnect delay bdev controller NVMe0
00:24:57.607 3239.940575: reconnect bdev controller NVMe0
00:24:57.607 5240.283054: reconnect delay bdev controller NVMe0
00:24:57.607 5240.299975: reconnect bdev controller NVMe0
00:24:57.607 7240.606384: reconnect delay bdev controller NVMe0
00:24:57.607 7240.620829: reconnect bdev controller NVMe0
00:24:57.607 10:17:27 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0'
00:24:57.607 10:17:27 -- host/timeout.sh@132 -- # (( 3 <= 2 ))
00:24:57.607 10:17:27 -- host/timeout.sh@136 -- # kill 100187
00:24:57.607 10:17:27 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
00:24:57.607 10:17:27 -- host/timeout.sh@139 -- # killprocess 100159
00:24:57.607 10:17:27 -- common/autotest_common.sh@926 -- # '[' -z 100159 ']'
00:24:57.607 10:17:27 -- common/autotest_common.sh@930 -- # kill -0 100159
00:24:57.607 10:17:27 -- common/autotest_common.sh@931 -- # uname
00:24:57.607 10:17:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:57.607 10:17:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100159
00:24:57.607 killing process with pid 100159
Received shutdown signal, test time was about 8.222857 seconds
00:24:57.607
00:24:57.607 Latency(us)
00:24:57.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:57.607 ===================================================================================================================
00:24:57.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:57.607 10:17:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:24:57.607 10:17:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:24:57.607 10:17:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100159'
00:24:57.607 10:17:27 -- common/autotest_common.sh@945 -- # kill 100159
00:24:57.607 10:17:27 -- common/autotest_common.sh@950 -- # wait 100159
00:24:57.607 10:17:28 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:57.866 10:17:28 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT
00:24:57.866 10:17:28 -- host/timeout.sh@145 -- # nvmftestfini
00:24:57.866 10:17:28 -- nvmf/common.sh@476 -- # nvmfcleanup
00:24:57.866 10:17:28 -- nvmf/common.sh@116 -- # sync
00:24:57.866 10:17:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:24:57.866 10:17:28 -- nvmf/common.sh@119 -- # set +e
00:24:57.866 10:17:28 -- nvmf/common.sh@120 -- # for i in {1..20}
00:24:57.866 10:17:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:24:57.866 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
10:17:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:24:57.866 10:17:28 -- nvmf/common.sh@123 -- # set -e
00:24:57.866 10:17:28 -- nvmf/common.sh@124 -- # return 0
00:24:57.866 10:17:28 -- nvmf/common.sh@477 -- # '[' -n 99575 ']'
00:24:57.866 10:17:28 -- nvmf/common.sh@478 -- # killprocess 99575
00:24:57.866 10:17:28 -- common/autotest_common.sh@926 -- # '[' -z 99575 ']'
00:24:57.866 10:17:28 -- common/autotest_common.sh@930 -- # kill -0 99575
00:24:57.866 10:17:28 -- common/autotest_common.sh@931 -- # uname
00:24:57.866 10:17:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:24:57.866 10:17:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 99575
00:24:58.125 killing process with pid 99575
10:17:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:24:58.125 10:17:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:24:58.125 10:17:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 99575'
00:24:58.125 10:17:28 -- common/autotest_common.sh@945 -- # kill 99575
00:24:58.125 10:17:28 -- common/autotest_common.sh@950 -- # wait 99575
00:24:58.125 10:17:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:24:58.125 10:17:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:24:58.125 10:17:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:24:58.125 10:17:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:58.125 10:17:28 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:24:58.125 10:17:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:58.125 10:17:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:58.125 10:17:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:58.125 10:17:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:24:58.125
00:24:58.125 real 0m46.237s
00:24:58.125 user 2m15.798s
00:24:58.125 sys 0m5.017s
00:24:58.125 10:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:58.125 ************************************
00:24:58.125 END TEST nvmf_timeout
00:24:58.125 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.125 ************************************
00:24:58.384 10:17:28 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]]
00:24:58.384 10:17:28 -- nvmf/nvmf.sh@127 -- # timing_exit host
00:24:58.384 10:17:28 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:58.384 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.384 10:17:28 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT
00:24:58.384
00:24:58.384 real 17m0.523s
00:24:58.384 user 54m4.850s
00:24:58.384 sys 3m46.043s
00:24:58.384 10:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:58.384 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.384 ************************************
00:24:58.384 END TEST nvmf_tcp
00:24:58.384 ************************************
00:24:58.384 10:17:28 -- spdk/autotest.sh@296 -- # [[ 0 -eq 0 ]]
00:24:58.384 10:17:28 -- spdk/autotest.sh@297 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:58.384 10:17:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:24:58.384 10:17:28 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:24:58.384 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.384 ************************************
00:24:58.384 START TEST spdkcli_nvmf_tcp
00:24:58.384 ************************************
00:24:58.384 10:17:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:24:58.384 * Looking for test storage...
00:24:58.384 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh
00:24:58.384 10:17:28 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py
00:24:58.384 10:17:28 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh
00:24:58.384 10:17:28 -- nvmf/common.sh@7 -- # uname -s
00:24:58.384 10:17:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:58.384 10:17:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:58.384 10:17:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:58.384 10:17:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:58.384 10:17:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:58.384 10:17:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:58.384 10:17:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:58.384 10:17:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:58.384 10:17:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:58.384 10:17:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:58.384 10:17:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31
00:24:58.384 10:17:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31
00:24:58.384 10:17:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:58.384 10:17:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:58.384 10:17:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt
00:24:58.384 10:17:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:24:58.384 10:17:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]]
00:24:58.384 10:17:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:24:58.384 10:17:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:24:58.384 10:17:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:58.384 10:17:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:58.384 10:17:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:58.384 10:17:28 -- paths/export.sh@5 -- # export PATH
00:24:58.384 10:17:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:24:58.384 10:17:28 -- nvmf/common.sh@46 -- # : 0
00:24:58.384 10:17:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID
00:24:58.384 10:17:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args
00:24:58.384 10:17:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']'
00:24:58.384 10:17:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:58.384 10:17:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:58.384 10:17:28 -- nvmf/common.sh@32 -- # '[' -n '' ']'
00:24:58.384 10:17:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']'
00:24:58.384 10:17:28 -- nvmf/common.sh@50 -- # have_pci_nics=0
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:24:58.384 10:17:28 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:58.384 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.384 10:17:28 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:24:58.384 10:17:28 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=100450
00:24:58.384 10:17:28 -- spdkcli/common.sh@34 -- # waitforlisten 100450
00:24:58.384 10:17:28 -- common/autotest_common.sh@819 -- # '[' -z 100450 ']'
00:24:58.384 10:17:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:58.384 10:17:28 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:24:58.384 10:17:28 -- common/autotest_common.sh@824 -- # local max_retries=100
00:24:58.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:58.384 10:17:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:58.384 10:17:28 -- common/autotest_common.sh@828 -- # xtrace_disable
00:24:58.384 10:17:28 -- common/autotest_common.sh@10 -- # set +x
00:24:58.643 [2024-07-14 10:17:28.951668] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization...
00:24:58.643 [2024-07-14 10:17:28.951785] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid100450 ]
00:24:58.643 [2024-07-14 10:17:29.091468] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:24:58.643 [2024-07-14 10:17:29.145941] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:24:58.643 [2024-07-14 10:17:29.146214] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:24:58.643 [2024-07-14 10:17:29.146219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:24:59.580 10:17:29 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:24:59.580 10:17:29 -- common/autotest_common.sh@852 -- # return 0
00:24:59.580 10:17:29 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
00:24:59.580 10:17:29 -- common/autotest_common.sh@718 -- # xtrace_disable
00:24:59.580 10:17:29 -- common/autotest_common.sh@10 -- # set +x
00:24:59.580 10:17:29 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
00:24:59.580 10:17:29 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
00:24:59.580 10:17:29 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
00:24:59.580 10:17:29 -- common/autotest_common.sh@712 -- # xtrace_disable
00:24:59.580 10:17:29 -- common/autotest_common.sh@10 -- # set +x
00:24:59.580 10:17:29 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True
00:24:59.580 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True
00:24:59.580 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True
00:24:59.580 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True
00:24:59.580 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True
00:24:59.580 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True
00:24:59.580 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True
00:24:59.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:59.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:59.580 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:24:59.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:24:59.580 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:24:59.580 '
00:24:59.839 [2024-07-14 10:17:30.345938] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:25:02.379 [2024-07-14 10:17:32.560803] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:03.313 [2024-07-14 10:17:33.829816] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:25:05.843 [2024-07-14 10:17:36.183414] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:25:07.750 [2024-07-14 10:17:38.212776] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:25:09.659 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:25:09.659 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:25:09.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:09.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:09.659 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:25:09.659 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:25:09.659 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:25:09.659 10:17:39 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:25:09.659 10:17:39 -- common/autotest_common.sh@718 -- # xtrace_disable
00:25:09.659 10:17:39 -- common/autotest_common.sh@10 -- # set +x
00:25:09.659 10:17:39 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:25:09.659 10:17:39 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:09.659 10:17:39 -- common/autotest_common.sh@10 -- # set +x
00:25:09.659 10:17:39 -- spdkcli/nvmf.sh@69 -- # check_match
00:25:09.659 10:17:39 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf
00:25:09.918 10:17:40 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:25:09.918 10:17:40 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:25:09.918 10:17:40 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:25:09.918 10:17:40 -- common/autotest_common.sh@718 -- # xtrace_disable
00:25:09.918 10:17:40 -- common/autotest_common.sh@10 -- # set +x
00:25:09.918 10:17:40 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:25:09.918 10:17:40 -- common/autotest_common.sh@712 -- # xtrace_disable
00:25:09.918 10:17:40 -- common/autotest_common.sh@10 -- # set +x
common/autotest_common.sh@10 -- # set +x 00:25:09.918 10:17:40 -- spdkcli/nvmf.sh@87 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:09.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:09.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:09.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:09.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:09.918 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:09.918 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:09.918 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:09.918 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:09.918 ' 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:15.183 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:15.183 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:15.183 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:15.183 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:15.442 10:17:45 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:15.442 10:17:45 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:15.442 10:17:45 -- common/autotest_common.sh@10 -- # set +x 00:25:15.442 10:17:45 -- spdkcli/nvmf.sh@90 -- # killprocess 100450 00:25:15.442 10:17:45 -- common/autotest_common.sh@926 -- # '[' -z 100450 ']' 00:25:15.442 10:17:45 -- common/autotest_common.sh@930 -- # kill -0 100450 00:25:15.442 10:17:45 -- common/autotest_common.sh@931 -- # uname 00:25:15.442 10:17:45 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:15.442 10:17:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100450 00:25:15.442 10:17:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:15.442 10:17:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:15.442 killing process with pid 100450 00:25:15.442 10:17:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100450' 00:25:15.442 10:17:45 -- common/autotest_common.sh@945 -- # kill 100450 00:25:15.442 [2024-07-14 10:17:45.833261] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:15.442 10:17:45 -- common/autotest_common.sh@950 -- # wait 100450 00:25:15.702 10:17:46 -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:15.702 10:17:46 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:15.702 10:17:46 -- spdkcli/common.sh@13 -- # '[' -n 100450 ']' 00:25:15.702 10:17:46 -- spdkcli/common.sh@14 -- # killprocess 100450 00:25:15.702 10:17:46 -- common/autotest_common.sh@926 -- # '[' -z 100450 ']' 00:25:15.702 10:17:46 -- common/autotest_common.sh@930 -- # kill -0 100450 00:25:15.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (100450) - No such process 00:25:15.702 Process with pid 100450 is not found 00:25:15.702 10:17:46 -- common/autotest_common.sh@953 -- # echo 'Process with pid 100450 is not found' 00:25:15.702 10:17:46 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:15.702 10:17:46 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:15.702 10:17:46 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:15.702 00:25:15.702 real 0m17.223s 00:25:15.702 user 0m37.026s 00:25:15.702 sys 0m0.913s 00:25:15.702 10:17:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:15.702 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:15.702 ************************************ 00:25:15.702 END TEST spdkcli_nvmf_tcp 00:25:15.702 ************************************ 00:25:15.702 10:17:46 -- spdk/autotest.sh@298 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.702 10:17:46 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:25:15.702 10:17:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:15.702 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:15.702 ************************************ 00:25:15.702 START TEST nvmf_identify_passthru 00:25:15.702 ************************************ 00:25:15.702 10:17:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:15.702 * Looking for test storage... 
00:25:15.702 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:15.702 10:17:46 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:15.702 10:17:46 -- nvmf/common.sh@7 -- # uname -s 00:25:15.702 10:17:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.702 10:17:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.703 10:17:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.703 10:17:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.703 10:17:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.703 10:17:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.703 10:17:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.703 10:17:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.703 10:17:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.703 10:17:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.703 10:17:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:25:15.703 10:17:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:25:15.703 10:17:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.703 10:17:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.703 10:17:46 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:15.703 10:17:46 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.703 10:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.703 10:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.703 10:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.703 10:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@5 -- # export PATH 00:25:15.703 10:17:46 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- nvmf/common.sh@46 -- # : 0 00:25:15.703 10:17:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.703 10:17:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.703 10:17:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.703 10:17:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.703 10:17:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.703 10:17:46 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.703 10:17:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.703 10:17:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.703 10:17:46 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:15.703 10:17:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.703 10:17:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.703 10:17:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.703 10:17:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- paths/export.sh@5 -- # export PATH 00:25:15.703 10:17:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.703 10:17:46 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:25:15.703 10:17:46 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:15.703 10:17:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.703 10:17:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:15.703 10:17:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:15.703 10:17:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:15.703 10:17:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.703 10:17:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:15.703 10:17:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.703 10:17:46 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:15.703 10:17:46 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:15.703 10:17:46 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:15.703 10:17:46 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:15.703 10:17:46 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:15.703 10:17:46 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:15.703 10:17:46 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.703 10:17:46 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.703 10:17:46 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:15.703 10:17:46 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:15.703 10:17:46 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:15.703 10:17:46 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:15.703 10:17:46 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:15.703 10:17:46 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:15.703 10:17:46 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:15.703 10:17:46 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:15.703 10:17:46 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:15.703 10:17:46 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:15.703 10:17:46 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:15.703 10:17:46 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:15.703 Cannot find device "nvmf_tgt_br" 00:25:15.703 10:17:46 -- nvmf/common.sh@154 -- # true 00:25:15.703 10:17:46 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:15.703 Cannot find device "nvmf_tgt_br2" 00:25:15.703 10:17:46 -- nvmf/common.sh@155 -- # true 00:25:15.703 10:17:46 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:15.703 10:17:46 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:15.703 Cannot find device "nvmf_tgt_br" 00:25:15.703 10:17:46 -- nvmf/common.sh@157 -- # true 00:25:15.703 10:17:46 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:15.703 Cannot find device "nvmf_tgt_br2" 00:25:15.703 10:17:46 -- nvmf/common.sh@158 -- # true 00:25:15.703 10:17:46 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:15.962 10:17:46 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:15.962 10:17:46 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:15.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:15.962 10:17:46 -- nvmf/common.sh@161 -- # true 00:25:15.962 10:17:46 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:15.962 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:25:15.962 10:17:46 -- nvmf/common.sh@162 -- # true 00:25:15.962 10:17:46 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:15.962 10:17:46 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:15.962 10:17:46 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:15.962 10:17:46 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:15.962 10:17:46 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:15.962 10:17:46 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:15.962 10:17:46 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:15.962 10:17:46 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:15.962 10:17:46 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:15.962 10:17:46 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:15.962 10:17:46 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:15.962 10:17:46 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:15.962 10:17:46 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:15.962 10:17:46 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:15.962 10:17:46 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:15.962 10:17:46 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:15.962 10:17:46 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:15.962 10:17:46 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:15.962 10:17:46 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:15.962 10:17:46 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:15.962 10:17:46 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:15.962 10:17:46 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:15.962 10:17:46 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:15.962 10:17:46 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:15.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:15.962 00:25:15.962 --- 10.0.0.2 ping statistics --- 00:25:15.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.962 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:15.962 10:17:46 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:15.962 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:15.962 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:25:15.962 00:25:15.962 --- 10.0.0.3 ping statistics --- 00:25:15.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.962 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:25:15.962 10:17:46 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:15.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:25:15.962 00:25:15.962 --- 10.0.0.1 ping statistics --- 00:25:15.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.962 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:25:15.962 10:17:46 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.962 10:17:46 -- nvmf/common.sh@421 -- # return 0 00:25:15.962 10:17:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:15.962 10:17:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.962 10:17:46 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:15.962 10:17:46 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:15.962 10:17:46 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.962 10:17:46 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:15.962 10:17:46 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:15.962 10:17:46 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:15.962 10:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:15.962 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:15.963 10:17:46 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:15.963 10:17:46 -- common/autotest_common.sh@1509 -- # bdfs=() 00:25:15.963 10:17:46 -- common/autotest_common.sh@1509 -- # local bdfs 00:25:15.963 10:17:46 -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:25:15.963 10:17:46 -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:25:15.963 10:17:46 -- common/autotest_common.sh@1498 -- # bdfs=() 00:25:15.963 10:17:46 -- common/autotest_common.sh@1498 -- # local bdfs 00:25:15.963 10:17:46 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:15.963 10:17:46 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:25:15.963 10:17:46 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:25:16.221 10:17:46 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:25:16.221 10:17:46 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:25:16.221 10:17:46 -- common/autotest_common.sh@1512 -- # echo 0000:00:06.0 00:25:16.221 10:17:46 -- target/identify_passthru.sh@16 -- # bdf=0000:00:06.0 00:25:16.221 10:17:46 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:06.0 ']' 00:25:16.221 10:17:46 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:16.221 10:17:46 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:16.221 10:17:46 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:16.221 10:17:46 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:25:16.221 10:17:46 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:06.0' -i 0 00:25:16.221 10:17:46 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:25:16.221 10:17:46 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:16.480 10:17:46 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:25:16.480 10:17:46 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:16.480 10:17:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:16.480 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.480 10:17:46 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:25:16.480 10:17:46 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:16.480 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.480 10:17:46 -- target/identify_passthru.sh@31 -- # nvmfpid=100945 00:25:16.480 10:17:46 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:16.480 10:17:46 -- target/identify_passthru.sh@35 -- # waitforlisten 100945 00:25:16.480 10:17:46 -- common/autotest_common.sh@819 -- # '[' -z 100945 ']' 00:25:16.480 10:17:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:16.480 10:17:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:16.480 10:17:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:16.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:16.480 10:17:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:16.480 10:17:46 -- common/autotest_common.sh@10 -- # set +x 00:25:16.480 10:17:46 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:16.480 [2024-07-14 10:17:47.016414] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:16.480 [2024-07-14 10:17:47.016525] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:16.738 [2024-07-14 10:17:47.156001] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:16.738 [2024-07-14 10:17:47.229133] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:16.738 [2024-07-14 10:17:47.229363] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:16.738 [2024-07-14 10:17:47.229377] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:16.738 [2024-07-14 10:17:47.229386] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
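The target here is launched with --wait-for-rpc, which holds SPDK initialization until configuration arrives over the RPC socket; the rpc_cmd calls traced below then enable the passthru identify handler, finish framework init, and assemble the subsystem. A condensed sketch of that same sequence using scripts/rpc.py directly (rpc_cmd in the test is a thin wrapper around it; the default socket /var/tmp/spdk.sock is assumed):

  # sketch of the identify_passthru target configuration, mirroring the rpc_cmd trace below
  ./scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr   # must run before framework_start_init
  ./scripts/rpc.py framework_start_init
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With --passthru-identify-ctrlr set, the exported controller answers Identify with the underlying PCIe device's data, which is why the test later compares serial and model numbers (12340/QEMU) across the local and NVMe/TCP paths.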
00:25:16.738 [2024-07-14 10:17:47.229479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:16.738 [2024-07-14 10:17:47.229604] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:16.738 [2024-07-14 10:17:47.231383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:16.738 [2024-07-14 10:17:47.231482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:17.670 10:17:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:17.670 10:17:48 -- common/autotest_common.sh@852 -- # return 0 00:25:17.670 10:17:48 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:17.670 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.670 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.670 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.670 10:17:48 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:17.670 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.670 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.670 [2024-07-14 10:17:48.116853] nvmf_tgt.c: 423:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:17.670 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.670 10:17:48 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:17.670 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.670 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.670 [2024-07-14 10:17:48.130860] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:17.670 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.670 10:17:48 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:17.670 10:17:48 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:17.670 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.670 10:17:48 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 00:25:17.670 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.670 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.928 Nvme0n1 00:25:17.928 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.928 10:17:48 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:17.928 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.928 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.928 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.928 10:17:48 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:17.928 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.928 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.928 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.929 10:17:48 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:17.929 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.929 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.929 [2024-07-14 10:17:48.271902] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:17.929 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:25:17.929 10:17:48 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:17.929 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:17.929 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:17.929 [2024-07-14 10:17:48.279693] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:17.929 [ 00:25:17.929 { 00:25:17.929 "allow_any_host": true, 00:25:17.929 "hosts": [], 00:25:17.929 "listen_addresses": [], 00:25:17.929 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:17.929 "subtype": "Discovery" 00:25:17.929 }, 00:25:17.929 { 00:25:17.929 "allow_any_host": true, 00:25:17.929 "hosts": [], 00:25:17.929 "listen_addresses": [ 00:25:17.929 { 00:25:17.929 "adrfam": "IPv4", 00:25:17.929 "traddr": "10.0.0.2", 00:25:17.929 "transport": "TCP", 00:25:17.929 "trsvcid": "4420", 00:25:17.929 "trtype": "TCP" 00:25:17.929 } 00:25:17.929 ], 00:25:17.929 "max_cntlid": 65519, 00:25:17.929 "max_namespaces": 1, 00:25:17.929 "min_cntlid": 1, 00:25:17.929 "model_number": "SPDK bdev Controller", 00:25:17.929 "namespaces": [ 00:25:17.929 { 00:25:17.929 "bdev_name": "Nvme0n1", 00:25:17.929 "name": "Nvme0n1", 00:25:17.929 "nguid": "F5DF9BD5757340439D65A74F11EE636C", 00:25:17.929 "nsid": 1, 00:25:17.929 "uuid": "f5df9bd5-7573-4043-9d65-a74f11ee636c" 00:25:17.929 } 00:25:17.929 ], 00:25:17.929 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:17.929 "serial_number": "SPDK00000000000001", 00:25:17.929 "subtype": "NVMe" 00:25:17.929 } 00:25:17.929 ] 00:25:17.929 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:17.929 10:17:48 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:17.929 10:17:48 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:17.929 10:17:48 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:25:18.187 10:17:48 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:25:18.187 10:17:48 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:25:18.187 10:17:48 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:18.187 10:17:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:18.187 10:17:48 -- common/autotest_common.sh@10 -- # set +x 00:25:18.187 10:17:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:18.446 10:17:48 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:18.446 10:17:48 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:18.446 10:17:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:18.446 10:17:48 -- nvmf/common.sh@116 -- # sync 00:25:18.446 10:17:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:25:18.446 10:17:48 -- nvmf/common.sh@119 -- # set +e 00:25:18.446 10:17:48 -- nvmf/common.sh@120 -- # for i in 
{1..20} 00:25:18.446 10:17:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:25:18.446 rmmod nvme_tcp 00:25:18.446 rmmod nvme_fabrics 00:25:18.446 rmmod nvme_keyring 00:25:18.446 10:17:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:18.446 10:17:48 -- nvmf/common.sh@123 -- # set -e 00:25:18.446 10:17:48 -- nvmf/common.sh@124 -- # return 0 00:25:18.446 10:17:48 -- nvmf/common.sh@477 -- # '[' -n 100945 ']' 00:25:18.446 10:17:48 -- nvmf/common.sh@478 -- # killprocess 100945 00:25:18.446 10:17:48 -- common/autotest_common.sh@926 -- # '[' -z 100945 ']' 00:25:18.446 10:17:48 -- common/autotest_common.sh@930 -- # kill -0 100945 00:25:18.446 10:17:48 -- common/autotest_common.sh@931 -- # uname 00:25:18.446 10:17:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:25:18.446 10:17:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 100945 00:25:18.446 10:17:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:25:18.446 10:17:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:25:18.446 killing process with pid 100945 00:25:18.446 10:17:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 100945' 00:25:18.446 10:17:48 -- common/autotest_common.sh@945 -- # kill 100945 00:25:18.446 [2024-07-14 10:17:48.860309] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:18.446 10:17:48 -- common/autotest_common.sh@950 -- # wait 100945 00:25:18.705 10:17:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:18.705 10:17:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:25:18.705 10:17:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:25:18.705 10:17:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:18.705 10:17:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:25:18.705 10:17:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.705 10:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:18.705 10:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.705 10:17:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:25:18.705 00:25:18.705 real 0m3.029s 00:25:18.705 user 0m7.728s 00:25:18.705 sys 0m0.770s 00:25:18.705 10:17:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:18.705 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:18.705 ************************************ 00:25:18.705 END TEST nvmf_identify_passthru 00:25:18.705 ************************************ 00:25:18.705 10:17:49 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:18.705 10:17:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:18.705 10:17:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:18.705 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:18.705 ************************************ 00:25:18.705 START TEST nvmf_dif 00:25:18.705 ************************************ 00:25:18.705 10:17:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:25:18.705 * Looking for test storage... 
00:25:18.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:25:18.705 10:17:49 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:18.705 10:17:49 -- nvmf/common.sh@7 -- # uname -s 00:25:18.705 10:17:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.705 10:17:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.705 10:17:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.705 10:17:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.705 10:17:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.705 10:17:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.705 10:17:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.705 10:17:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.705 10:17:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.705 10:17:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.705 10:17:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:25:18.705 10:17:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:25:18.705 10:17:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.706 10:17:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.706 10:17:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:18.706 10:17:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:18.706 10:17:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.706 10:17:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.706 10:17:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.706 10:17:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.706 10:17:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.706 10:17:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.706 10:17:49 -- paths/export.sh@5 -- # export PATH 00:25:18.706 10:17:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.706 10:17:49 -- nvmf/common.sh@46 -- # : 0 00:25:18.706 10:17:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:18.706 10:17:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:18.706 10:17:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:18.706 10:17:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.706 10:17:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.706 10:17:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:18.706 10:17:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:18.706 10:17:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:18.706 10:17:49 -- target/dif.sh@15 -- # NULL_META=16 00:25:18.706 10:17:49 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:18.706 10:17:49 -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:18.706 10:17:49 -- target/dif.sh@15 -- # NULL_DIF=1 00:25:18.706 10:17:49 -- target/dif.sh@135 -- # nvmftestinit 00:25:18.706 10:17:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:25:18.706 10:17:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.706 10:17:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:18.706 10:17:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:18.706 10:17:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:18.706 10:17:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.706 10:17:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:18.706 10:17:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.706 10:17:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:25:18.706 10:17:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:25:18.706 10:17:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:25:18.706 10:17:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:25:18.706 10:17:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:25:18.706 10:17:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:25:18.706 10:17:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:18.706 10:17:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:18.706 10:17:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:18.706 10:17:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:25:18.706 10:17:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:18.706 10:17:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:18.706 10:17:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:18.706 10:17:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:18.706 10:17:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:18.706 10:17:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:18.706 10:17:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:18.706 10:17:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:18.706 10:17:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:25:18.965 10:17:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:25:18.965 Cannot find device "nvmf_tgt_br" 
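The trace at this point is nvmf_veth_init tearing down any stale interfaces (the "Cannot find device" messages are the expected misses on a clean host) before rebuilding the test topology: a network namespace for the target, veth pairs bridged back to the host, and an iptables rule opening the NVMe/TCP port. In outline, following the same commands the trace below executes:

  # outline of nvmf_veth_init as traced here; names and IPs come from nvmf/common.sh
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # target side, first interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # target side, second interface
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # bring all ends up, inside and outside the namespace
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # bridge the host-side peers together and open the NVMe/TCP port
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # verify: host reaches both target IPs, and the namespace reaches the host
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The three one-packet pings at the end are the connectivity gate; only after they succeed does the helper return 0 and the test load nvme-tcp.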
00:25:18.965 10:17:49 -- nvmf/common.sh@154 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:25:18.965 Cannot find device "nvmf_tgt_br2" 00:25:18.965 10:17:49 -- nvmf/common.sh@155 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:25:18.965 10:17:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:25:18.965 Cannot find device "nvmf_tgt_br" 00:25:18.965 10:17:49 -- nvmf/common.sh@157 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:25:18.965 Cannot find device "nvmf_tgt_br2" 00:25:18.965 10:17:49 -- nvmf/common.sh@158 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:25:18.965 10:17:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:25:18.965 10:17:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:18.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.965 10:17:49 -- nvmf/common.sh@161 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:18.965 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:18.965 10:17:49 -- nvmf/common.sh@162 -- # true 00:25:18.965 10:17:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:25:18.965 10:17:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:18.965 10:17:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:18.965 10:17:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:18.965 10:17:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:18.965 10:17:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:18.965 10:17:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:18.965 10:17:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:18.965 10:17:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:18.965 10:17:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:25:18.965 10:17:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:25:18.965 10:17:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:25:18.965 10:17:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:25:18.965 10:17:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:18.965 10:17:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:18.965 10:17:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:18.965 10:17:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:25:18.965 10:17:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:25:18.965 10:17:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:25:19.223 10:17:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:19.223 10:17:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:19.223 10:17:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:19.223 10:17:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:19.223 10:17:49 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:25:19.223 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:19.223 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:25:19.223 00:25:19.223 --- 10.0.0.2 ping statistics --- 00:25:19.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.223 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:25:19.223 10:17:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:25:19.223 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:19.223 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.046 ms 00:25:19.223 00:25:19.223 --- 10.0.0.3 ping statistics --- 00:25:19.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.223 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:25:19.223 10:17:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:19.223 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:19.223 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:25:19.223 00:25:19.223 --- 10.0.0.1 ping statistics --- 00:25:19.223 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:19.223 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:25:19.223 10:17:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:19.223 10:17:49 -- nvmf/common.sh@421 -- # return 0 00:25:19.223 10:17:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:25:19.223 10:17:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:25:19.481 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:25:19.481 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:19.481 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:25:19.481 10:17:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:19.481 10:17:49 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:25:19.481 10:17:49 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:25:19.481 10:17:49 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:19.481 10:17:49 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:25:19.481 10:17:49 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:25:19.481 10:17:49 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:19.481 10:17:49 -- target/dif.sh@137 -- # nvmfappstart 00:25:19.481 10:17:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:19.481 10:17:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:25:19.481 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:19.481 10:17:49 -- nvmf/common.sh@469 -- # nvmfpid=101292 00:25:19.481 10:17:49 -- nvmf/common.sh@470 -- # waitforlisten 101292 00:25:19.481 10:17:49 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:19.481 10:17:49 -- common/autotest_common.sh@819 -- # '[' -z 101292 ']' 00:25:19.481 10:17:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:19.481 10:17:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:25:19.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:19.481 10:17:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
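With the namespace verified, nvmfappstart launches the target inside it and waitforlisten blocks until the RPC socket answers; the dif test then configures a DIF-capable transport over a null bdev, as the rpc_cmd calls that follow show. A minimal sketch of that startup and configuration (the polling loop is an illustrative stand-in for the test's waitforlisten helper, not its actual implementation):

  # hedged sketch of nvmfappstart + DIF setup for the dif test; paths abbreviated
  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &
  nvmfpid=$!
  # illustrative wait loop: poll the default RPC socket until the app responds
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done
  # DIF configuration, matching the rpc_cmd trace below
  ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

The null bdev carries 16 bytes of per-block metadata with DIF type 1, and --dif-insert-or-strip makes the TCP transport generate and check the protection information on the wire path, which is the behavior under test.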
00:25:19.481 10:17:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:25:19.481 10:17:49 -- common/autotest_common.sh@10 -- # set +x 00:25:19.481 [2024-07-14 10:17:50.021530] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 00:25:19.481 [2024-07-14 10:17:50.021658] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:19.739 [2024-07-14 10:17:50.164319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:19.739 [2024-07-14 10:17:50.258269] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:19.739 [2024-07-14 10:17:50.258413] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:19.739 [2024-07-14 10:17:50.258426] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:19.739 [2024-07-14 10:17:50.258434] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:19.739 [2024-07-14 10:17:50.258457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.674 10:17:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:25:20.674 10:17:51 -- common/autotest_common.sh@852 -- # return 0 00:25:20.674 10:17:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:20.674 10:17:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 10:17:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.674 10:17:51 -- target/dif.sh@139 -- # create_transport 00:25:20.674 10:17:51 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:20.674 10:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 [2024-07-14 10:17:51.060582] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:20.674 10:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.674 10:17:51 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:20.674 10:17:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:20.674 10:17:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 ************************************ 00:25:20.674 START TEST fio_dif_1_default 00:25:20.674 ************************************ 00:25:20.674 10:17:51 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:25:20.674 10:17:51 -- target/dif.sh@86 -- # create_subsystems 0 00:25:20.674 10:17:51 -- target/dif.sh@28 -- # local sub 00:25:20.674 10:17:51 -- target/dif.sh@30 -- # for sub in "$@" 00:25:20.674 10:17:51 -- target/dif.sh@31 -- # create_subsystem 0 00:25:20.674 10:17:51 -- target/dif.sh@18 -- # local sub_id=0 00:25:20.674 10:17:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:20.674 10:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 bdev_null0 00:25:20.674 10:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.674 10:17:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:20.674 10:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 10:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.674 10:17:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:20.674 10:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 10:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.674 10:17:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:20.674 10:17:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:20.674 10:17:51 -- common/autotest_common.sh@10 -- # set +x 00:25:20.674 [2024-07-14 10:17:51.104683] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:20.674 10:17:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:20.674 10:17:51 -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:20.674 10:17:51 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:20.674 10:17:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:20.674 10:17:51 -- nvmf/common.sh@520 -- # config=() 00:25:20.674 10:17:51 -- nvmf/common.sh@520 -- # local subsystem config 00:25:20.674 10:17:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:20.674 10:17:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:20.674 { 00:25:20.674 "params": { 00:25:20.674 "name": "Nvme$subsystem", 00:25:20.674 "trtype": "$TEST_TRANSPORT", 00:25:20.674 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:20.674 "adrfam": "ipv4", 00:25:20.674 "trsvcid": "$NVMF_PORT", 00:25:20.674 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:20.674 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:20.674 "hdgst": ${hdgst:-false}, 00:25:20.674 "ddgst": ${ddgst:-false} 00:25:20.674 }, 00:25:20.674 "method": "bdev_nvme_attach_controller" 00:25:20.674 } 00:25:20.674 EOF 00:25:20.674 )") 00:25:20.674 10:17:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.674 10:17:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.674 10:17:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:20.674 10:17:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:20.674 10:17:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:20.675 10:17:51 -- nvmf/common.sh@542 -- # cat 00:25:20.675 10:17:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.675 10:17:51 -- common/autotest_common.sh@1320 -- # shift 00:25:20.675 10:17:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:20.675 10:17:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.675 10:17:51 -- target/dif.sh@82 -- # gen_fio_conf 00:25:20.675 10:17:51 -- target/dif.sh@54 -- # local file 00:25:20.675 10:17:51 -- target/dif.sh@56 -- # cat 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # awk '{print 
$3}' 00:25:20.675 10:17:51 -- nvmf/common.sh@544 -- # jq . 00:25:20.675 10:17:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:20.675 10:17:51 -- target/dif.sh@72 -- # (( file <= files )) 00:25:20.675 10:17:51 -- nvmf/common.sh@545 -- # IFS=, 00:25:20.675 10:17:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:20.675 "params": { 00:25:20.675 "name": "Nvme0", 00:25:20.675 "trtype": "tcp", 00:25:20.675 "traddr": "10.0.0.2", 00:25:20.675 "adrfam": "ipv4", 00:25:20.675 "trsvcid": "4420", 00:25:20.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:20.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:20.675 "hdgst": false, 00:25:20.675 "ddgst": false 00:25:20.675 }, 00:25:20.675 "method": "bdev_nvme_attach_controller" 00:25:20.675 }' 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:20.675 10:17:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:20.675 10:17:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:20.675 10:17:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:20.675 10:17:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:20.675 10:17:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:20.675 10:17:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:20.933 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:20.933 fio-3.35 00:25:20.933 Starting 1 thread 00:25:21.192 [2024-07-14 10:17:51.728229] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:21.192 [2024-07-14 10:17:51.728323] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:33.421 00:25:33.421 filename0: (groupid=0, jobs=1): err= 0: pid=101382: Sun Jul 14 10:18:01 2024 00:25:33.421 read: IOPS=698, BW=2795KiB/s (2862kB/s)(27.3MiB/10001msec) 00:25:33.421 slat (nsec): min=6327, max=75266, avg=7867.47, stdev=3278.10 00:25:33.421 clat (usec): min=366, max=41966, avg=5700.83, stdev=13626.12 00:25:33.421 lat (usec): min=372, max=41976, avg=5708.70, stdev=13626.20 00:25:33.421 clat percentiles (usec): 00:25:33.421 | 1.00th=[ 371], 5.00th=[ 379], 10.00th=[ 388], 20.00th=[ 396], 00:25:33.421 | 30.00th=[ 408], 40.00th=[ 416], 50.00th=[ 424], 60.00th=[ 433], 00:25:33.421 | 70.00th=[ 445], 80.00th=[ 465], 90.00th=[40633], 95.00th=[41157], 00:25:33.421 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:25:33.421 | 99.99th=[42206] 00:25:33.421 bw ( KiB/s): min= 1504, max= 4608, per=98.32%, avg=2748.63, stdev=756.87, samples=19 00:25:33.421 iops : min= 376, max= 1152, avg=687.16, stdev=189.22, samples=19 00:25:33.421 lat (usec) : 500=84.93%, 750=1.96%, 1000=0.04% 00:25:33.421 lat (msec) : 2=0.01%, 50=13.05% 00:25:33.421 cpu : usr=92.24%, sys=7.20%, ctx=21, majf=0, minf=0 00:25:33.421 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:33.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:33.421 issued rwts: total=6988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:33.421 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:33.421 00:25:33.421 Run status group 0 (all jobs): 00:25:33.421 READ: bw=2795KiB/s (2862kB/s), 2795KiB/s-2795KiB/s (2862kB/s-2862kB/s), io=27.3MiB (28.6MB), run=10001-10001msec 00:25:33.421 10:18:02 -- target/dif.sh@88 -- # destroy_subsystems 0 00:25:33.421 10:18:02 -- target/dif.sh@43 -- # local sub 00:25:33.421 10:18:02 -- target/dif.sh@45 -- # for sub in "$@" 00:25:33.421 10:18:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:33.421 10:18:02 -- target/dif.sh@36 -- # local sub_id=0 00:25:33.421 10:18:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 00:25:33.421 real 0m10.994s 00:25:33.421 user 0m9.873s 00:25:33.421 sys 0m0.985s 00:25:33.421 10:18:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 ************************************ 00:25:33.421 END TEST fio_dif_1_default 00:25:33.421 ************************************ 00:25:33.421 10:18:02 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:25:33.421 10:18:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:33.421 10:18:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 ************************************ 00:25:33.421 
START TEST fio_dif_1_multi_subsystems 00:25:33.421 ************************************ 00:25:33.421 10:18:02 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:25:33.421 10:18:02 -- target/dif.sh@92 -- # local files=1 00:25:33.421 10:18:02 -- target/dif.sh@94 -- # create_subsystems 0 1 00:25:33.421 10:18:02 -- target/dif.sh@28 -- # local sub 00:25:33.421 10:18:02 -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.421 10:18:02 -- target/dif.sh@31 -- # create_subsystem 0 00:25:33.421 10:18:02 -- target/dif.sh@18 -- # local sub_id=0 00:25:33.421 10:18:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 bdev_null0 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 [2024-07-14 10:18:02.149071] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@30 -- # for sub in "$@" 00:25:33.421 10:18:02 -- target/dif.sh@31 -- # create_subsystem 1 00:25:33.421 10:18:02 -- target/dif.sh@18 -- # local sub_id=1 00:25:33.421 10:18:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 bdev_null1 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:33.421 10:18:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:33.421 
10:18:02 -- common/autotest_common.sh@10 -- # set +x 00:25:33.421 10:18:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:33.421 10:18:02 -- target/dif.sh@95 -- # fio /dev/fd/62 00:25:33.421 10:18:02 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:25:33.421 10:18:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:25:33.421 10:18:02 -- nvmf/common.sh@520 -- # config=() 00:25:33.421 10:18:02 -- nvmf/common.sh@520 -- # local subsystem config 00:25:33.421 10:18:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.421 10:18:02 -- target/dif.sh@82 -- # gen_fio_conf 00:25:33.421 10:18:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.421 10:18:02 -- target/dif.sh@54 -- # local file 00:25:33.421 10:18:02 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.421 10:18:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.421 { 00:25:33.421 "params": { 00:25:33.421 "name": "Nvme$subsystem", 00:25:33.421 "trtype": "$TEST_TRANSPORT", 00:25:33.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.421 "adrfam": "ipv4", 00:25:33.421 "trsvcid": "$NVMF_PORT", 00:25:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.421 "hdgst": ${hdgst:-false}, 00:25:33.421 "ddgst": ${ddgst:-false} 00:25:33.421 }, 00:25:33.421 "method": "bdev_nvme_attach_controller" 00:25:33.421 } 00:25:33.421 EOF 00:25:33.421 )") 00:25:33.421 10:18:02 -- target/dif.sh@56 -- # cat 00:25:33.421 10:18:02 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:33.421 10:18:02 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:33.421 10:18:02 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:33.421 10:18:02 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.421 10:18:02 -- common/autotest_common.sh@1320 -- # shift 00:25:33.421 10:18:02 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:33.421 10:18:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.421 10:18:02 -- nvmf/common.sh@542 -- # cat 00:25:33.421 10:18:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:33.421 10:18:02 -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.421 10:18:02 -- target/dif.sh@73 -- # cat 00:25:33.421 10:18:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.421 10:18:02 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:33.421 10:18:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:33.421 10:18:02 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:33.421 10:18:02 -- target/dif.sh@72 -- # (( file++ )) 00:25:33.421 10:18:02 -- target/dif.sh@72 -- # (( file <= files )) 00:25:33.421 10:18:02 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:33.421 { 00:25:33.421 "params": { 00:25:33.421 "name": "Nvme$subsystem", 00:25:33.421 "trtype": "$TEST_TRANSPORT", 00:25:33.421 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:33.421 "adrfam": "ipv4", 00:25:33.421 "trsvcid": "$NVMF_PORT", 00:25:33.421 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:33.421 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:33.421 "hdgst": ${hdgst:-false}, 00:25:33.421 "ddgst": ${ddgst:-false} 00:25:33.421 }, 00:25:33.421 "method": "bdev_nvme_attach_controller" 
00:25:33.421 } 00:25:33.421 EOF 00:25:33.421 )") 00:25:33.421 10:18:02 -- nvmf/common.sh@542 -- # cat 00:25:33.421 10:18:02 -- nvmf/common.sh@544 -- # jq . 00:25:33.421 10:18:02 -- nvmf/common.sh@545 -- # IFS=, 00:25:33.421 10:18:02 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:33.421 "params": { 00:25:33.421 "name": "Nvme0", 00:25:33.421 "trtype": "tcp", 00:25:33.421 "traddr": "10.0.0.2", 00:25:33.421 "adrfam": "ipv4", 00:25:33.421 "trsvcid": "4420", 00:25:33.422 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:33.422 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:33.422 "hdgst": false, 00:25:33.422 "ddgst": false 00:25:33.422 }, 00:25:33.422 "method": "bdev_nvme_attach_controller" 00:25:33.422 },{ 00:25:33.422 "params": { 00:25:33.422 "name": "Nvme1", 00:25:33.422 "trtype": "tcp", 00:25:33.422 "traddr": "10.0.0.2", 00:25:33.422 "adrfam": "ipv4", 00:25:33.422 "trsvcid": "4420", 00:25:33.422 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:33.422 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:33.422 "hdgst": false, 00:25:33.422 "ddgst": false 00:25:33.422 }, 00:25:33.422 "method": "bdev_nvme_attach_controller" 00:25:33.422 }' 00:25:33.422 10:18:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:33.422 10:18:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:33.422 10:18:02 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:33.422 10:18:02 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:33.422 10:18:02 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:33.422 10:18:02 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:33.422 10:18:02 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:33.422 10:18:02 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:33.422 10:18:02 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:33.422 10:18:02 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:33.422 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:33.422 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:33.422 fio-3.35 00:25:33.422 Starting 2 threads 00:25:33.422 [2024-07-14 10:18:02.939075] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:33.422 [2024-07-14 10:18:02.939193] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:43.388 00:25:43.388 filename0: (groupid=0, jobs=1): err= 0: pid=101541: Sun Jul 14 10:18:13 2024 00:25:43.388 read: IOPS=173, BW=692KiB/s (709kB/s)(6944KiB/10030msec) 00:25:43.388 slat (nsec): min=6369, max=43251, avg=8072.24, stdev=2951.55 00:25:43.388 clat (usec): min=369, max=41549, avg=23085.25, stdev=20090.56 00:25:43.388 lat (usec): min=376, max=41559, avg=23093.32, stdev=20090.59 00:25:43.388 clat percentiles (usec): 00:25:43.388 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 416], 00:25:43.388 | 30.00th=[ 437], 40.00th=[ 478], 50.00th=[40633], 60.00th=[41157], 00:25:43.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:43.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:43.388 | 99.99th=[41681] 00:25:43.388 bw ( KiB/s): min= 384, max= 960, per=49.39%, avg=692.70, stdev=144.70, samples=20 00:25:43.388 iops : min= 96, max= 240, avg=173.15, stdev=36.17, samples=20 00:25:43.388 lat (usec) : 500=41.94%, 750=1.84% 00:25:43.388 lat (msec) : 2=0.23%, 50=55.99% 00:25:43.388 cpu : usr=95.73%, sys=3.92%, ctx=17, majf=0, minf=0 00:25:43.388 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.388 issued rwts: total=1736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.388 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:43.388 filename1: (groupid=0, jobs=1): err= 0: pid=101542: Sun Jul 14 10:18:13 2024 00:25:43.388 read: IOPS=177, BW=709KiB/s (726kB/s)(7120KiB/10038msec) 00:25:43.388 slat (nsec): min=6371, max=48667, avg=8467.40, stdev=4013.20 00:25:43.388 clat (usec): min=371, max=41465, avg=22531.74, stdev=20153.93 00:25:43.388 lat (usec): min=378, max=41476, avg=22540.21, stdev=20153.87 00:25:43.388 clat percentiles (usec): 00:25:43.388 | 1.00th=[ 379], 5.00th=[ 392], 10.00th=[ 400], 20.00th=[ 412], 00:25:43.388 | 30.00th=[ 429], 40.00th=[ 474], 50.00th=[40633], 60.00th=[41157], 00:25:43.388 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:25:43.388 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:25:43.388 | 99.99th=[41681] 00:25:43.388 bw ( KiB/s): min= 544, max= 928, per=50.68%, avg=710.30, stdev=111.12, samples=20 00:25:43.388 iops : min= 136, max= 232, avg=177.55, stdev=27.77, samples=20 00:25:43.388 lat (usec) : 500=42.08%, 750=2.87%, 1000=0.22% 00:25:43.388 lat (msec) : 2=0.22%, 50=54.61% 00:25:43.388 cpu : usr=95.82%, sys=3.85%, ctx=15, majf=0, minf=0 00:25:43.388 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:43.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.388 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:43.388 issued rwts: total=1780,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:43.388 latency : target=0, window=0, percentile=100.00%, depth=4 00:25:43.388 00:25:43.388 Run status group 0 (all jobs): 00:25:43.388 READ: bw=1401KiB/s (1435kB/s), 692KiB/s-709KiB/s (709kB/s-726kB/s), io=13.7MiB (14.4MB), run=10030-10038msec 00:25:43.388 10:18:13 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:25:43.388 10:18:13 -- target/dif.sh@43 -- # local sub 00:25:43.388 10:18:13 -- target/dif.sh@45 -- # for sub in "$@" 
00:25:43.388 10:18:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:43.388 10:18:13 -- target/dif.sh@36 -- # local sub_id=0 00:25:43.388 10:18:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 10:18:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 10:18:13 -- target/dif.sh@45 -- # for sub in "$@" 00:25:43.388 10:18:13 -- target/dif.sh@46 -- # destroy_subsystem 1 00:25:43.388 10:18:13 -- target/dif.sh@36 -- # local sub_id=1 00:25:43.388 10:18:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 10:18:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 00:25:43.388 real 0m11.211s 00:25:43.388 user 0m19.974s 00:25:43.388 sys 0m1.069s 00:25:43.388 10:18:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:43.388 ************************************ 00:25:43.388 END TEST fio_dif_1_multi_subsystems 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 ************************************ 00:25:43.388 10:18:13 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:25:43.388 10:18:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:25:43.388 10:18:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 ************************************ 00:25:43.388 START TEST fio_dif_rand_params 00:25:43.388 ************************************ 00:25:43.388 10:18:13 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:25:43.388 10:18:13 -- target/dif.sh@100 -- # local NULL_DIF 00:25:43.388 10:18:13 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:25:43.388 10:18:13 -- target/dif.sh@103 -- # NULL_DIF=3 00:25:43.388 10:18:13 -- target/dif.sh@103 -- # bs=128k 00:25:43.388 10:18:13 -- target/dif.sh@103 -- # numjobs=3 00:25:43.388 10:18:13 -- target/dif.sh@103 -- # iodepth=3 00:25:43.388 10:18:13 -- target/dif.sh@103 -- # runtime=5 00:25:43.388 10:18:13 -- target/dif.sh@105 -- # create_subsystems 0 00:25:43.388 10:18:13 -- target/dif.sh@28 -- # local sub 00:25:43.388 10:18:13 -- target/dif.sh@30 -- # for sub in "$@" 00:25:43.388 10:18:13 -- target/dif.sh@31 -- # create_subsystem 0 00:25:43.388 10:18:13 -- target/dif.sh@18 -- # local sub_id=0 00:25:43.388 10:18:13 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 bdev_null0 00:25:43.388 10:18:13 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 10:18:13 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:43.388 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.388 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.388 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.388 10:18:13 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:43.389 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.389 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.389 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.389 10:18:13 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:43.389 10:18:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:43.389 10:18:13 -- common/autotest_common.sh@10 -- # set +x 00:25:43.389 [2024-07-14 10:18:13.414677] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:43.389 10:18:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:43.389 10:18:13 -- target/dif.sh@106 -- # fio /dev/fd/62 00:25:43.389 10:18:13 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:25:43.389 10:18:13 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:43.389 10:18:13 -- nvmf/common.sh@520 -- # config=() 00:25:43.389 10:18:13 -- nvmf/common.sh@520 -- # local subsystem config 00:25:43.389 10:18:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:43.389 10:18:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:43.389 { 00:25:43.389 "params": { 00:25:43.389 "name": "Nvme$subsystem", 00:25:43.389 "trtype": "$TEST_TRANSPORT", 00:25:43.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:43.389 "adrfam": "ipv4", 00:25:43.389 "trsvcid": "$NVMF_PORT", 00:25:43.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:43.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:43.389 "hdgst": ${hdgst:-false}, 00:25:43.389 "ddgst": ${ddgst:-false} 00:25:43.389 }, 00:25:43.389 "method": "bdev_nvme_attach_controller" 00:25:43.389 } 00:25:43.389 EOF 00:25:43.389 )") 00:25:43.389 10:18:13 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.389 10:18:13 -- target/dif.sh@82 -- # gen_fio_conf 00:25:43.389 10:18:13 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.389 10:18:13 -- target/dif.sh@54 -- # local file 00:25:43.389 10:18:13 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:43.389 10:18:13 -- target/dif.sh@56 -- # cat 00:25:43.389 10:18:13 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:43.389 10:18:13 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:43.389 10:18:13 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.389 10:18:13 -- common/autotest_common.sh@1320 -- # shift 00:25:43.389 10:18:13 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:43.389 10:18:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.389 10:18:13 -- nvmf/common.sh@542 -- # cat 00:25:43.389 10:18:13 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # ldd 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.389 10:18:13 -- target/dif.sh@72 -- # (( file <= files )) 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:43.389 10:18:13 -- nvmf/common.sh@544 -- # jq . 00:25:43.389 10:18:13 -- nvmf/common.sh@545 -- # IFS=, 00:25:43.389 10:18:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:43.389 "params": { 00:25:43.389 "name": "Nvme0", 00:25:43.389 "trtype": "tcp", 00:25:43.389 "traddr": "10.0.0.2", 00:25:43.389 "adrfam": "ipv4", 00:25:43.389 "trsvcid": "4420", 00:25:43.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:43.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:43.389 "hdgst": false, 00:25:43.389 "ddgst": false 00:25:43.389 }, 00:25:43.389 "method": "bdev_nvme_attach_controller" 00:25:43.389 }' 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:43.389 10:18:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:43.389 10:18:13 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:43.389 10:18:13 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:43.389 10:18:13 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:43.389 10:18:13 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:43.389 10:18:13 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:43.389 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:25:43.389 ... 00:25:43.389 fio-3.35 00:25:43.389 Starting 3 threads 00:25:43.647 [2024-07-14 10:18:14.024961] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:25:43.647 [2024-07-14 10:18:14.025046] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:25:48.910 00:25:48.910 filename0: (groupid=0, jobs=1): err= 0: pid=101698: Sun Jul 14 10:18:19 2024 00:25:48.910 read: IOPS=228, BW=28.6MiB/s (30.0MB/s)(143MiB/5005msec) 00:25:48.910 slat (nsec): min=6619, max=37715, avg=9387.71, stdev=4105.12 00:25:48.910 clat (usec): min=3877, max=16285, avg=13077.80, stdev=2192.02 00:25:48.910 lat (usec): min=3887, max=16297, avg=13087.18, stdev=2191.82 00:25:48.910 clat percentiles (usec): 00:25:48.910 | 1.00th=[ 3916], 5.00th=[ 8225], 10.00th=[ 9110], 20.00th=[12780], 00:25:48.910 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[13829], 00:25:48.910 | 70.00th=[14091], 80.00th=[14353], 90.00th=[14877], 95.00th=[15139], 00:25:48.910 | 99.00th=[15795], 99.50th=[15926], 99.90th=[16319], 99.95th=[16319], 00:25:48.910 | 99.99th=[16319] 00:25:48.910 bw ( KiB/s): min=26880, max=33859, per=28.59%, avg=29267.50, stdev=1844.91, samples=10 00:25:48.910 iops : min= 210, max= 264, avg=228.60, stdev=14.27, samples=10 00:25:48.910 lat (msec) : 4=1.66%, 10=9.34%, 20=89.01% 00:25:48.910 cpu : usr=92.71%, sys=6.00%, ctx=41, majf=0, minf=0 00:25:48.910 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.910 issued rwts: total=1146,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.910 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.910 filename0: (groupid=0, jobs=1): err= 0: pid=101699: Sun Jul 14 10:18:19 2024 00:25:48.910 read: IOPS=299, BW=37.5MiB/s (39.3MB/s)(188MiB/5006msec) 00:25:48.910 slat (nsec): min=6645, max=38867, avg=10961.43, stdev=3579.99 00:25:48.910 clat (usec): min=5474, max=52022, avg=9995.97, stdev=4219.84 00:25:48.910 lat (usec): min=5485, max=52031, avg=10006.93, stdev=4219.74 00:25:48.910 clat percentiles (usec): 00:25:48.910 | 1.00th=[ 6390], 5.00th=[ 7570], 10.00th=[ 8455], 20.00th=[ 8979], 00:25:48.910 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[ 9896], 00:25:48.910 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10814], 95.00th=[11076], 00:25:48.910 | 99.00th=[12911], 99.50th=[51119], 99.90th=[51643], 99.95th=[52167], 00:25:48.910 | 99.99th=[52167] 00:25:48.910 bw ( KiB/s): min=35584, max=41728, per=37.43%, avg=38323.20, stdev=1931.06, samples=10 00:25:48.910 iops : min= 278, max= 326, avg=299.40, stdev=15.09, samples=10 00:25:48.910 lat (msec) : 10=64.20%, 20=34.80%, 50=0.20%, 100=0.80% 00:25:48.910 cpu : usr=92.01%, sys=6.45%, ctx=6, majf=0, minf=0 00:25:48.910 IO depths : 1=0.7%, 2=99.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.910 issued rwts: total=1500,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.910 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.910 filename0: (groupid=0, jobs=1): err= 0: pid=101700: Sun Jul 14 10:18:19 2024 00:25:48.910 read: IOPS=271, BW=33.9MiB/s (35.6MB/s)(170MiB/5006msec) 00:25:48.911 slat (nsec): min=6629, max=39277, avg=11040.41, stdev=4036.58 00:25:48.911 clat (usec): min=4740, max=53168, avg=11040.58, stdev=4435.16 00:25:48.911 lat (usec): min=4750, max=53182, avg=11051.62, stdev=4435.34 00:25:48.911 clat percentiles (usec): 00:25:48.911 | 
1.00th=[ 6259], 5.00th=[ 7504], 10.00th=[ 9110], 20.00th=[ 9896], 00:25:48.911 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:25:48.911 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:25:48.911 | 99.00th=[50594], 99.50th=[51119], 99.90th=[52691], 99.95th=[53216], 00:25:48.911 | 99.99th=[53216] 00:25:48.911 bw ( KiB/s): min=28729, max=38144, per=33.89%, avg=34693.70, stdev=2906.45, samples=10 00:25:48.911 iops : min= 224, max= 298, avg=271.00, stdev=22.81, samples=10 00:25:48.911 lat (msec) : 10=21.87%, 20=77.03%, 50=0.07%, 100=1.03% 00:25:48.911 cpu : usr=92.45%, sys=6.09%, ctx=9, majf=0, minf=0 00:25:48.911 IO depths : 1=2.9%, 2=97.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:25:48.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:48.911 issued rwts: total=1358,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:48.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:25:48.911 00:25:48.911 Run status group 0 (all jobs): 00:25:48.911 READ: bw=100.0MiB/s (105MB/s), 28.6MiB/s-37.5MiB/s (30.0MB/s-39.3MB/s), io=501MiB (525MB), run=5005-5006msec 00:25:48.911 10:18:19 -- target/dif.sh@107 -- # destroy_subsystems 0 00:25:48.911 10:18:19 -- target/dif.sh@43 -- # local sub 00:25:48.911 10:18:19 -- target/dif.sh@45 -- # for sub in "$@" 00:25:48.911 10:18:19 -- target/dif.sh@46 -- # destroy_subsystem 0 00:25:48.911 10:18:19 -- target/dif.sh@36 -- # local sub_id=0 00:25:48.911 10:18:19 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # NULL_DIF=2 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # bs=4k 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # numjobs=8 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # iodepth=16 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # runtime= 00:25:48.911 10:18:19 -- target/dif.sh@109 -- # files=2 00:25:48.911 10:18:19 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:25:48.911 10:18:19 -- target/dif.sh@28 -- # local sub 00:25:48.911 10:18:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.911 10:18:19 -- target/dif.sh@31 -- # create_subsystem 0 00:25:48.911 10:18:19 -- target/dif.sh@18 -- # local sub_id=0 00:25:48.911 10:18:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 bdev_null0 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 
00:25:48.911 10:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 [2024-07-14 10:18:19.394386] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.911 10:18:19 -- target/dif.sh@31 -- # create_subsystem 1 00:25:48.911 10:18:19 -- target/dif.sh@18 -- # local sub_id=1 00:25:48.911 10:18:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 bdev_null1 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@30 -- # for sub in "$@" 00:25:48.911 10:18:19 -- target/dif.sh@31 -- # create_subsystem 2 00:25:48.911 10:18:19 -- target/dif.sh@18 -- # local sub_id=2 00:25:48.911 10:18:19 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 bdev_null2 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:48.911 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:48.911 10:18:19 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:25:48.911 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:48.911 10:18:19 -- common/autotest_common.sh@10 
-- # set +x 00:25:49.170 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.170 10:18:19 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:49.170 10:18:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:25:49.170 10:18:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.170 10:18:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:25:49.170 10:18:19 -- target/dif.sh@112 -- # fio /dev/fd/62 00:25:49.170 10:18:19 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:25:49.170 10:18:19 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:25:49.170 10:18:19 -- nvmf/common.sh@520 -- # config=() 00:25:49.170 10:18:19 -- nvmf/common.sh@520 -- # local subsystem config 00:25:49.170 10:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.170 10:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.170 { 00:25:49.170 "params": { 00:25:49.170 "name": "Nvme$subsystem", 00:25:49.170 "trtype": "$TEST_TRANSPORT", 00:25:49.170 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.170 "adrfam": "ipv4", 00:25:49.170 "trsvcid": "$NVMF_PORT", 00:25:49.170 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.170 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.170 "hdgst": ${hdgst:-false}, 00:25:49.170 "ddgst": ${ddgst:-false} 00:25:49.170 }, 00:25:49.170 "method": "bdev_nvme_attach_controller" 00:25:49.170 } 00:25:49.170 EOF 00:25:49.170 )") 00:25:49.170 10:18:19 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.170 10:18:19 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.170 10:18:19 -- target/dif.sh@82 -- # gen_fio_conf 00:25:49.170 10:18:19 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:25:49.170 10:18:19 -- target/dif.sh@54 -- # local file 00:25:49.170 10:18:19 -- target/dif.sh@56 -- # cat 00:25:49.171 10:18:19 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:49.171 10:18:19 -- common/autotest_common.sh@1318 -- # local sanitizers 00:25:49.171 10:18:19 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.171 10:18:19 -- common/autotest_common.sh@1320 -- # shift 00:25:49.171 10:18:19 -- nvmf/common.sh@542 -- # cat 00:25:49.171 10:18:19 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:25:49.171 10:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # grep libasan 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file = 1 )) 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.171 10:18:19 -- target/dif.sh@73 -- # cat 00:25:49.171 10:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.171 10:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.171 { 00:25:49.171 "params": { 00:25:49.171 "name": "Nvme$subsystem", 00:25:49.171 "trtype": "$TEST_TRANSPORT", 00:25:49.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.171 "adrfam": "ipv4", 00:25:49.171 "trsvcid": "$NVMF_PORT", 00:25:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.171 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:25:49.171 "hdgst": ${hdgst:-false}, 00:25:49.171 "ddgst": ${ddgst:-false} 00:25:49.171 }, 00:25:49.171 "method": "bdev_nvme_attach_controller" 00:25:49.171 } 00:25:49.171 EOF 00:25:49.171 )") 00:25:49.171 10:18:19 -- nvmf/common.sh@542 -- # cat 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.171 10:18:19 -- target/dif.sh@73 -- # cat 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file++ )) 00:25:49.171 10:18:19 -- target/dif.sh@72 -- # (( file <= files )) 00:25:49.171 10:18:19 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:25:49.171 10:18:19 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:25:49.171 { 00:25:49.171 "params": { 00:25:49.171 "name": "Nvme$subsystem", 00:25:49.171 "trtype": "$TEST_TRANSPORT", 00:25:49.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:49.171 "adrfam": "ipv4", 00:25:49.171 "trsvcid": "$NVMF_PORT", 00:25:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:49.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:49.171 "hdgst": ${hdgst:-false}, 00:25:49.171 "ddgst": ${ddgst:-false} 00:25:49.171 }, 00:25:49.171 "method": "bdev_nvme_attach_controller" 00:25:49.171 } 00:25:49.171 EOF 00:25:49.171 )") 00:25:49.171 10:18:19 -- nvmf/common.sh@542 -- # cat 00:25:49.171 10:18:19 -- nvmf/common.sh@544 -- # jq . 00:25:49.171 10:18:19 -- nvmf/common.sh@545 -- # IFS=, 00:25:49.171 10:18:19 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:25:49.171 "params": { 00:25:49.171 "name": "Nvme0", 00:25:49.171 "trtype": "tcp", 00:25:49.171 "traddr": "10.0.0.2", 00:25:49.171 "adrfam": "ipv4", 00:25:49.171 "trsvcid": "4420", 00:25:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:49.171 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:49.171 "hdgst": false, 00:25:49.171 "ddgst": false 00:25:49.171 }, 00:25:49.171 "method": "bdev_nvme_attach_controller" 00:25:49.171 },{ 00:25:49.171 "params": { 00:25:49.171 "name": "Nvme1", 00:25:49.171 "trtype": "tcp", 00:25:49.171 "traddr": "10.0.0.2", 00:25:49.171 "adrfam": "ipv4", 00:25:49.171 "trsvcid": "4420", 00:25:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:49.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:49.171 "hdgst": false, 00:25:49.171 "ddgst": false 00:25:49.171 }, 00:25:49.171 "method": "bdev_nvme_attach_controller" 00:25:49.171 },{ 00:25:49.171 "params": { 00:25:49.171 "name": "Nvme2", 00:25:49.171 "trtype": "tcp", 00:25:49.171 "traddr": "10.0.0.2", 00:25:49.171 "adrfam": "ipv4", 00:25:49.171 "trsvcid": "4420", 00:25:49.171 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:49.171 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:49.171 "hdgst": false, 00:25:49.171 "ddgst": false 00:25:49.171 }, 00:25:49.171 "method": "bdev_nvme_attach_controller" 00:25:49.171 }' 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:49.171 10:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:49.171 10:18:19 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:25:49.171 10:18:19 -- common/autotest_common.sh@1324 -- # asan_lib= 00:25:49.171 10:18:19 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:25:49.171 10:18:19 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:25:49.171 10:18:19 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:49.171 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.171 ... 00:25:49.171 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.171 ... 00:25:49.171 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:25:49.171 ... 00:25:49.171 fio-3.35 00:25:49.171 Starting 24 threads 00:25:50.107 [2024-07-14 10:18:20.317303] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:25:50.107 [2024-07-14 10:18:20.317384] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:00.107 00:26:00.107 filename0: (groupid=0, jobs=1): err= 0: pid=101795: Sun Jul 14 10:18:30 2024 00:26:00.107 read: IOPS=278, BW=1114KiB/s (1141kB/s)(10.9MiB/10054msec) 00:26:00.107 slat (usec): min=3, max=6018, avg=17.52, stdev=173.66 00:26:00.107 clat (msec): min=15, max=142, avg=57.34, stdev=18.80 00:26:00.107 lat (msec): min=15, max=142, avg=57.35, stdev=18.81 00:26:00.107 clat percentiles (msec): 00:26:00.107 | 1.00th=[ 17], 5.00th=[ 33], 10.00th=[ 38], 20.00th=[ 42], 00:26:00.107 | 30.00th=[ 47], 40.00th=[ 48], 50.00th=[ 55], 60.00th=[ 61], 00:26:00.107 | 70.00th=[ 65], 80.00th=[ 72], 90.00th=[ 85], 95.00th=[ 95], 00:26:00.107 | 99.00th=[ 103], 99.50th=[ 113], 99.90th=[ 142], 99.95th=[ 142], 00:26:00.107 | 99.99th=[ 142] 00:26:00.107 bw ( KiB/s): min= 896, max= 1536, per=4.67%, avg=1113.60, stdev=170.67, samples=20 00:26:00.107 iops : min= 224, max= 384, avg=278.40, stdev=42.67, samples=20 00:26:00.107 lat (msec) : 20=1.14%, 50=45.11%, 100=52.29%, 250=1.46% 00:26:00.107 cpu : usr=43.63%, sys=0.94%, ctx=1330, majf=0, minf=9 00:26:00.107 IO depths : 1=1.4%, 2=2.8%, 4=9.4%, 8=74.5%, 16=11.9%, 32=0.0%, >=64=0.0% 00:26:00.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.107 complete : 0=0.0%, 4=90.0%, 8=5.2%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2800,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101796: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=235, BW=942KiB/s (965kB/s)(9424KiB/10004msec) 00:26:00.108 slat (usec): min=3, max=8020, avg=20.14, stdev=222.13 00:26:00.108 clat (msec): min=21, max=160, avg=67.80, stdev=19.58 00:26:00.108 lat (msec): min=21, max=160, avg=67.82, stdev=19.58 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 30], 5.00th=[ 37], 10.00th=[ 47], 20.00th=[ 55], 00:26:00.108 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 69], 00:26:00.108 | 70.00th=[ 72], 80.00th=[ 82], 90.00th=[ 94], 95.00th=[ 105], 00:26:00.108 | 99.00th=[ 131], 99.50th=[ 136], 99.90th=[ 161], 99.95th=[ 161], 00:26:00.108 | 99.99th=[ 161] 00:26:00.108 bw ( KiB/s): min= 728, max= 1280, per=3.96%, avg=944.53, stdev=142.90, samples=19 00:26:00.108 iops : min= 182, max= 320, avg=236.11, stdev=35.74, samples=19 00:26:00.108 lat (msec) : 50=16.55%, 100=75.21%, 250=8.23% 00:26:00.108 cpu : usr=43.98%, sys=0.94%, ctx=1368, majf=0, minf=9 00:26:00.108 IO depths : 1=2.5%, 2=6.0%, 4=15.9%, 8=64.9%, 16=10.7%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=91.8%, 8=3.3%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2356,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101797: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=228, BW=915KiB/s (937kB/s)(9164KiB/10020msec) 00:26:00.108 slat (usec): min=4, max=4021, avg=12.24, stdev=83.90 00:26:00.108 clat (msec): min=21, max=145, avg=69.90, stdev=20.64 00:26:00.108 lat (msec): min=21, max=145, avg=69.91, stdev=20.64 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 48], 20.00th=[ 58], 00:26:00.108 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:26:00.108 | 70.00th=[ 81], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 108], 00:26:00.108 | 99.00th=[ 121], 99.50th=[ 121], 99.90th=[ 146], 99.95th=[ 146], 00:26:00.108 | 99.99th=[ 146] 00:26:00.108 bw ( KiB/s): min= 640, max= 1464, per=3.82%, avg=910.74, stdev=169.84, samples=19 00:26:00.108 iops : min= 160, max= 366, avg=227.68, stdev=42.46, samples=19 00:26:00.108 lat (msec) : 50=16.85%, 100=76.21%, 250=6.94% 00:26:00.108 cpu : usr=32.54%, sys=0.65%, ctx=848, majf=0, minf=9 00:26:00.108 IO depths : 1=1.6%, 2=3.8%, 4=13.4%, 8=69.6%, 16=11.6%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101798: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=217, BW=871KiB/s (892kB/s)(8712KiB/10003msec) 00:26:00.108 slat (usec): min=4, max=8021, avg=22.23, stdev=297.13 00:26:00.108 clat (msec): min=6, max=143, avg=73.37, stdev=22.25 00:26:00.108 lat (msec): min=6, max=143, avg=73.39, stdev=22.25 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 29], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 61], 00:26:00.108 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:26:00.108 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 100], 95.00th=[ 112], 00:26:00.108 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 144], 99.95th=[ 144], 00:26:00.108 | 99.99th=[ 144] 00:26:00.108 bw ( KiB/s): min= 640, max= 1096, per=3.62%, avg=864.84, stdev=115.53, samples=19 00:26:00.108 iops : min= 160, max= 274, avg=216.21, stdev=28.88, samples=19 00:26:00.108 lat (msec) : 10=0.46%, 50=15.29%, 100=74.47%, 250=9.78% 00:26:00.108 cpu : usr=32.52%, sys=0.63%, ctx=862, majf=0, minf=9 00:26:00.108 IO depths : 1=1.9%, 2=4.5%, 4=13.9%, 8=68.5%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2178,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101799: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=279, BW=1119KiB/s (1146kB/s)(11.0MiB/10049msec) 00:26:00.108 slat (usec): min=3, max=8069, avg=19.40, stdev=214.98 00:26:00.108 clat (msec): min=5, max=119, avg=57.00, stdev=19.05 00:26:00.108 lat (msec): min=6, max=120, avg=57.02, stdev=19.06 00:26:00.108 clat percentiles 
(msec): 00:26:00.108 | 1.00th=[ 15], 5.00th=[ 29], 10.00th=[ 36], 20.00th=[ 42], 00:26:00.108 | 30.00th=[ 47], 40.00th=[ 49], 50.00th=[ 56], 60.00th=[ 61], 00:26:00.108 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 84], 95.00th=[ 90], 00:26:00.108 | 99.00th=[ 107], 99.50th=[ 115], 99.90th=[ 121], 99.95th=[ 121], 00:26:00.108 | 99.99th=[ 121] 00:26:00.108 bw ( KiB/s): min= 832, max= 1408, per=4.69%, avg=1118.40, stdev=157.70, samples=20 00:26:00.108 iops : min= 208, max= 352, avg=279.60, stdev=39.43, samples=20 00:26:00.108 lat (msec) : 10=0.57%, 20=1.67%, 50=41.50%, 100=54.84%, 250=1.42% 00:26:00.108 cpu : usr=40.78%, sys=0.88%, ctx=1301, majf=0, minf=9 00:26:00.108 IO depths : 1=0.4%, 2=0.9%, 4=5.9%, 8=78.9%, 16=13.9%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=89.3%, 8=6.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101800: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=227, BW=912KiB/s (934kB/s)(9124KiB/10007msec) 00:26:00.108 slat (usec): min=4, max=8021, avg=23.19, stdev=232.43 00:26:00.108 clat (msec): min=17, max=143, avg=70.02, stdev=21.12 00:26:00.108 lat (msec): min=17, max=143, avg=70.04, stdev=21.13 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 23], 5.00th=[ 38], 10.00th=[ 44], 20.00th=[ 57], 00:26:00.108 | 30.00th=[ 64], 40.00th=[ 65], 50.00th=[ 67], 60.00th=[ 70], 00:26:00.108 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 107], 00:26:00.108 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:26:00.108 | 99.99th=[ 144] 00:26:00.108 bw ( KiB/s): min= 640, max= 1312, per=3.82%, avg=910.00, stdev=151.68, samples=19 00:26:00.108 iops : min= 160, max= 328, avg=227.47, stdev=37.93, samples=19 00:26:00.108 lat (msec) : 20=0.31%, 50=13.94%, 100=77.69%, 250=8.07% 00:26:00.108 cpu : usr=42.70%, sys=0.77%, ctx=1430, majf=0, minf=9 00:26:00.108 IO depths : 1=2.3%, 2=5.5%, 4=14.8%, 8=66.2%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=91.6%, 8=3.7%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2281,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101801: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=222, BW=890KiB/s (912kB/s)(8908KiB/10007msec) 00:26:00.108 slat (usec): min=4, max=8031, avg=25.89, stdev=339.31 00:26:00.108 clat (msec): min=13, max=145, avg=71.65, stdev=21.49 00:26:00.108 lat (msec): min=13, max=145, avg=71.68, stdev=21.50 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 23], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 60], 00:26:00.108 | 30.00th=[ 62], 40.00th=[ 64], 50.00th=[ 71], 60.00th=[ 72], 00:26:00.108 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 97], 95.00th=[ 109], 00:26:00.108 | 99.00th=[ 132], 99.50th=[ 142], 99.90th=[ 146], 99.95th=[ 146], 00:26:00.108 | 99.99th=[ 146] 00:26:00.108 bw ( KiB/s): min= 624, max= 1200, per=3.70%, avg=883.05, stdev=144.55, samples=19 00:26:00.108 iops : min= 156, max= 300, avg=220.74, stdev=36.14, samples=19 00:26:00.108 lat (msec) : 20=0.45%, 50=14.82%, 100=75.30%, 250=9.43% 00:26:00.108 cpu : usr=34.38%, sys=0.75%, ctx=921, majf=0, minf=9 00:26:00.108 IO 
depths : 1=2.0%, 2=4.8%, 4=14.4%, 8=67.8%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:00.108 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 complete : 0=0.0%, 4=91.1%, 8=3.7%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.108 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.108 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.108 filename0: (groupid=0, jobs=1): err= 0: pid=101802: Sun Jul 14 10:18:30 2024 00:26:00.108 read: IOPS=228, BW=914KiB/s (936kB/s)(9164KiB/10025msec) 00:26:00.108 slat (usec): min=5, max=8021, avg=21.42, stdev=289.73 00:26:00.108 clat (msec): min=20, max=143, avg=69.86, stdev=19.48 00:26:00.108 lat (msec): min=20, max=143, avg=69.89, stdev=19.47 00:26:00.108 clat percentiles (msec): 00:26:00.108 | 1.00th=[ 33], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 58], 00:26:00.108 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:26:00.108 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 97], 95.00th=[ 107], 00:26:00.109 | 99.00th=[ 121], 99.50th=[ 126], 99.90th=[ 144], 99.95th=[ 144], 00:26:00.109 | 99.99th=[ 144] 00:26:00.109 bw ( KiB/s): min= 728, max= 1280, per=3.83%, avg=912.32, stdev=153.07, samples=19 00:26:00.109 iops : min= 182, max= 320, avg=228.05, stdev=38.29, samples=19 00:26:00.109 lat (msec) : 50=16.76%, 100=74.73%, 250=8.51% 00:26:00.109 cpu : usr=35.07%, sys=0.75%, ctx=961, majf=0, minf=9 00:26:00.109 IO depths : 1=1.5%, 2=3.4%, 4=12.7%, 8=70.7%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=90.4%, 8=4.7%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=2291,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101803: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=301, BW=1204KiB/s (1233kB/s)(11.8MiB/10059msec) 00:26:00.109 slat (usec): min=3, max=4024, avg=15.84, stdev=127.64 00:26:00.109 clat (msec): min=3, max=131, avg=53.05, stdev=18.28 00:26:00.109 lat (msec): min=3, max=131, avg=53.06, stdev=18.28 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 5], 5.00th=[ 26], 10.00th=[ 36], 20.00th=[ 40], 00:26:00.109 | 30.00th=[ 44], 40.00th=[ 47], 50.00th=[ 50], 60.00th=[ 56], 00:26:00.109 | 70.00th=[ 62], 80.00th=[ 67], 90.00th=[ 75], 95.00th=[ 86], 00:26:00.109 | 99.00th=[ 101], 99.50th=[ 108], 99.90th=[ 132], 99.95th=[ 132], 00:26:00.109 | 99.99th=[ 132] 00:26:00.109 bw ( KiB/s): min= 688, max= 1876, per=5.05%, avg=1204.60, stdev=253.51, samples=20 00:26:00.109 iops : min= 172, max= 469, avg=301.15, stdev=63.38, samples=20 00:26:00.109 lat (msec) : 4=0.53%, 10=1.58%, 20=0.86%, 50=48.43%, 100=47.28% 00:26:00.109 lat (msec) : 250=1.32% 00:26:00.109 cpu : usr=45.15%, sys=0.91%, ctx=1685, majf=0, minf=0 00:26:00.109 IO depths : 1=0.9%, 2=1.9%, 4=8.5%, 8=76.1%, 16=12.6%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=89.7%, 8=5.7%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=3029,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101804: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=223, BW=893KiB/s (915kB/s)(8956KiB/10024msec) 00:26:00.109 slat (usec): min=4, max=8020, avg=20.84, stdev=293.10 00:26:00.109 clat (msec): 
min=23, max=134, avg=71.37, stdev=19.20 00:26:00.109 lat (msec): min=23, max=134, avg=71.39, stdev=19.20 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 27], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 61], 00:26:00.109 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 73], 00:26:00.109 | 70.00th=[ 84], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 106], 00:26:00.109 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 136], 00:26:00.109 | 99.99th=[ 136] 00:26:00.109 bw ( KiB/s): min= 768, max= 1176, per=3.78%, avg=901.89, stdev=108.18, samples=19 00:26:00.109 iops : min= 192, max= 294, avg=225.42, stdev=27.00, samples=19 00:26:00.109 lat (msec) : 50=16.39%, 100=78.16%, 250=5.45% 00:26:00.109 cpu : usr=32.43%, sys=0.71%, ctx=846, majf=0, minf=9 00:26:00.109 IO depths : 1=2.1%, 2=4.9%, 4=14.0%, 8=67.8%, 16=11.2%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=91.3%, 8=3.7%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=2239,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101805: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=238, BW=955KiB/s (978kB/s)(9576KiB/10024msec) 00:26:00.109 slat (usec): min=4, max=8022, avg=17.07, stdev=231.53 00:26:00.109 clat (msec): min=20, max=132, avg=66.88, stdev=18.97 00:26:00.109 lat (msec): min=20, max=132, avg=66.90, stdev=18.97 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 24], 5.00th=[ 37], 10.00th=[ 46], 20.00th=[ 48], 00:26:00.109 | 30.00th=[ 59], 40.00th=[ 61], 50.00th=[ 64], 60.00th=[ 71], 00:26:00.109 | 70.00th=[ 75], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 96], 00:26:00.109 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 133], 99.95th=[ 133], 00:26:00.109 | 99.99th=[ 133] 00:26:00.109 bw ( KiB/s): min= 768, max= 1328, per=3.98%, avg=950.95, stdev=136.48, samples=20 00:26:00.109 iops : min= 192, max= 332, avg=237.70, stdev=34.14, samples=20 00:26:00.109 lat (msec) : 50=23.18%, 100=73.81%, 250=3.01% 00:26:00.109 cpu : usr=32.85%, sys=0.64%, ctx=880, majf=0, minf=10 00:26:00.109 IO depths : 1=1.3%, 2=3.3%, 4=10.9%, 8=72.1%, 16=12.5%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=90.5%, 8=5.2%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=2394,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101806: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=253, BW=1014KiB/s (1039kB/s)(9.94MiB/10033msec) 00:26:00.109 slat (usec): min=5, max=8023, avg=17.20, stdev=224.58 00:26:00.109 clat (msec): min=20, max=146, avg=62.89, stdev=20.55 00:26:00.109 lat (msec): min=20, max=146, avg=62.91, stdev=20.55 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 23], 5.00th=[ 35], 10.00th=[ 38], 20.00th=[ 47], 00:26:00.109 | 30.00th=[ 49], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 68], 00:26:00.109 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 86], 95.00th=[ 96], 00:26:00.109 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:26:00.109 | 99.99th=[ 148] 00:26:00.109 bw ( KiB/s): min= 736, max= 1277, per=4.24%, avg=1011.05, stdev=136.70, samples=20 00:26:00.109 iops : min= 184, max= 319, avg=252.75, stdev=34.15, samples=20 00:26:00.109 lat (msec) : 50=32.43%, 
100=64.19%, 250=3.38% 00:26:00.109 cpu : usr=32.88%, sys=0.77%, ctx=897, majf=0, minf=9 00:26:00.109 IO depths : 1=0.6%, 2=1.4%, 4=8.3%, 8=76.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=89.6%, 8=5.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=2544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101807: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=232, BW=929KiB/s (951kB/s)(9308KiB/10022msec) 00:26:00.109 slat (nsec): min=4849, max=63977, avg=11125.73, stdev=4620.93 00:26:00.109 clat (msec): min=17, max=148, avg=68.81, stdev=21.11 00:26:00.109 lat (msec): min=17, max=148, avg=68.82, stdev=21.11 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 56], 00:26:00.109 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 66], 60.00th=[ 71], 00:26:00.109 | 70.00th=[ 77], 80.00th=[ 85], 90.00th=[ 95], 95.00th=[ 108], 00:26:00.109 | 99.00th=[ 134], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:26:00.109 | 99.99th=[ 148] 00:26:00.109 bw ( KiB/s): min= 640, max= 1456, per=3.88%, avg=925.84, stdev=190.23, samples=19 00:26:00.109 iops : min= 160, max= 364, avg=231.42, stdev=47.57, samples=19 00:26:00.109 lat (msec) : 20=0.26%, 50=17.62%, 100=75.16%, 250=6.96% 00:26:00.109 cpu : usr=41.07%, sys=0.95%, ctx=1293, majf=0, minf=9 00:26:00.109 IO depths : 1=1.9%, 2=4.2%, 4=11.6%, 8=70.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:00.109 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 complete : 0=0.0%, 4=90.8%, 8=4.9%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.109 issued rwts: total=2327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.109 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.109 filename1: (groupid=0, jobs=1): err= 0: pid=101808: Sun Jul 14 10:18:30 2024 00:26:00.109 read: IOPS=235, BW=941KiB/s (963kB/s)(9412KiB/10006msec) 00:26:00.109 slat (usec): min=3, max=4016, avg=12.80, stdev=82.70 00:26:00.109 clat (msec): min=20, max=167, avg=67.94, stdev=20.89 00:26:00.109 lat (msec): min=20, max=167, avg=67.96, stdev=20.90 00:26:00.109 clat percentiles (msec): 00:26:00.109 | 1.00th=[ 27], 5.00th=[ 38], 10.00th=[ 45], 20.00th=[ 50], 00:26:00.109 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 66], 60.00th=[ 71], 00:26:00.109 | 70.00th=[ 75], 80.00th=[ 83], 90.00th=[ 93], 95.00th=[ 108], 00:26:00.109 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 167], 99.95th=[ 167], 00:26:00.109 | 99.99th=[ 169] 00:26:00.109 bw ( KiB/s): min= 560, max= 1152, per=3.95%, avg=942.95, stdev=151.54, samples=19 00:26:00.110 iops : min= 140, max= 288, avg=235.68, stdev=37.91, samples=19 00:26:00.110 lat (msec) : 50=20.36%, 100=72.33%, 250=7.31% 00:26:00.110 cpu : usr=38.24%, sys=0.95%, ctx=1054, majf=0, minf=9 00:26:00.110 IO depths : 1=2.3%, 2=5.0%, 4=13.6%, 8=68.2%, 16=11.0%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2353,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename1: (groupid=0, jobs=1): err= 0: pid=101809: Sun Jul 14 10:18:30 2024 00:26:00.110 read: IOPS=253, BW=1013KiB/s (1038kB/s)(9.93MiB/10031msec) 00:26:00.110 slat 
(usec): min=4, max=8035, avg=15.32, stdev=178.01 00:26:00.110 clat (msec): min=22, max=140, avg=63.01, stdev=19.97 00:26:00.110 lat (msec): min=22, max=140, avg=63.02, stdev=19.97 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 24], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 45], 00:26:00.110 | 30.00th=[ 51], 40.00th=[ 60], 50.00th=[ 63], 60.00th=[ 68], 00:26:00.110 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 89], 95.00th=[ 100], 00:26:00.110 | 99.00th=[ 109], 99.50th=[ 121], 99.90th=[ 140], 99.95th=[ 140], 00:26:00.110 | 99.99th=[ 140] 00:26:00.110 bw ( KiB/s): min= 688, max= 1280, per=4.23%, avg=1009.90, stdev=163.10, samples=20 00:26:00.110 iops : min= 172, max= 320, avg=252.45, stdev=40.76, samples=20 00:26:00.110 lat (msec) : 50=29.91%, 100=66.51%, 250=3.58% 00:26:00.110 cpu : usr=40.83%, sys=0.85%, ctx=1269, majf=0, minf=9 00:26:00.110 IO depths : 1=1.1%, 2=2.5%, 4=9.5%, 8=73.9%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=90.2%, 8=5.6%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2541,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename1: (groupid=0, jobs=1): err= 0: pid=101810: Sun Jul 14 10:18:30 2024 00:26:00.110 read: IOPS=231, BW=926KiB/s (949kB/s)(9296KiB/10034msec) 00:26:00.110 slat (usec): min=5, max=3997, avg=12.45, stdev=82.81 00:26:00.110 clat (msec): min=21, max=140, avg=68.96, stdev=21.15 00:26:00.110 lat (msec): min=21, max=140, avg=68.97, stdev=21.15 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 41], 20.00th=[ 50], 00:26:00.110 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 69], 60.00th=[ 72], 00:26:00.110 | 70.00th=[ 83], 80.00th=[ 85], 90.00th=[ 96], 95.00th=[ 103], 00:26:00.110 | 99.00th=[ 132], 99.50th=[ 134], 99.90th=[ 142], 99.95th=[ 142], 00:26:00.110 | 99.99th=[ 142] 00:26:00.110 bw ( KiB/s): min= 616, max= 1277, per=3.87%, avg=923.05, stdev=176.47, samples=20 00:26:00.110 iops : min= 154, max= 319, avg=230.75, stdev=44.09, samples=20 00:26:00.110 lat (msec) : 50=20.70%, 100=73.92%, 250=5.38% 00:26:00.110 cpu : usr=34.51%, sys=0.72%, ctx=957, majf=0, minf=9 00:26:00.110 IO depths : 1=1.4%, 2=3.1%, 4=10.9%, 8=72.5%, 16=12.1%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2324,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename2: (groupid=0, jobs=1): err= 0: pid=101811: Sun Jul 14 10:18:30 2024 00:26:00.110 read: IOPS=262, BW=1050KiB/s (1076kB/s)(10.3MiB/10064msec) 00:26:00.110 slat (usec): min=3, max=8033, avg=22.37, stdev=281.10 00:26:00.110 clat (msec): min=5, max=131, avg=60.77, stdev=20.30 00:26:00.110 lat (msec): min=5, max=131, avg=60.79, stdev=20.31 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 7], 5.00th=[ 33], 10.00th=[ 37], 20.00th=[ 46], 00:26:00.110 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 64], 00:26:00.110 | 70.00th=[ 71], 80.00th=[ 77], 90.00th=[ 91], 95.00th=[ 96], 00:26:00.110 | 99.00th=[ 108], 99.50th=[ 112], 99.90th=[ 132], 99.95th=[ 132], 00:26:00.110 | 99.99th=[ 132] 00:26:00.110 bw ( KiB/s): min= 768, max= 1640, per=4.40%, avg=1050.80, stdev=211.38, samples=20 00:26:00.110 iops : min= 192, max= 410, avg=262.70, 
stdev=52.84, samples=20 00:26:00.110 lat (msec) : 10=1.93%, 20=0.15%, 50=32.88%, 100=63.49%, 250=1.55% 00:26:00.110 cpu : usr=36.27%, sys=0.81%, ctx=985, majf=0, minf=0 00:26:00.110 IO depths : 1=2.1%, 2=4.5%, 4=12.3%, 8=69.6%, 16=11.5%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=90.7%, 8=4.8%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2643,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename2: (groupid=0, jobs=1): err= 0: pid=101812: Sun Jul 14 10:18:30 2024 00:26:00.110 read: IOPS=260, BW=1043KiB/s (1068kB/s)(10.2MiB/10046msec) 00:26:00.110 slat (usec): min=3, max=4017, avg=12.70, stdev=80.87 00:26:00.110 clat (msec): min=15, max=137, avg=61.18, stdev=20.51 00:26:00.110 lat (msec): min=15, max=137, avg=61.20, stdev=20.51 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 20], 5.00th=[ 32], 10.00th=[ 39], 20.00th=[ 45], 00:26:00.110 | 30.00th=[ 48], 40.00th=[ 55], 50.00th=[ 61], 60.00th=[ 65], 00:26:00.110 | 70.00th=[ 69], 80.00th=[ 73], 90.00th=[ 91], 95.00th=[ 97], 00:26:00.110 | 99.00th=[ 115], 99.50th=[ 125], 99.90th=[ 133], 99.95th=[ 136], 00:26:00.110 | 99.99th=[ 138] 00:26:00.110 bw ( KiB/s): min= 768, max= 1776, per=4.37%, avg=1041.70, stdev=210.43, samples=20 00:26:00.110 iops : min= 192, max= 444, avg=260.40, stdev=52.61, samples=20 00:26:00.110 lat (msec) : 20=1.22%, 50=32.71%, 100=61.76%, 250=4.31% 00:26:00.110 cpu : usr=43.21%, sys=0.98%, ctx=1333, majf=0, minf=9 00:26:00.110 IO depths : 1=1.2%, 2=3.1%, 4=10.7%, 8=72.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=90.5%, 8=4.7%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2620,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename2: (groupid=0, jobs=1): err= 0: pid=101813: Sun Jul 14 10:18:30 2024 00:26:00.110 read: IOPS=254, BW=1017KiB/s (1042kB/s)(9.98MiB/10051msec) 00:26:00.110 slat (usec): min=4, max=8023, avg=16.81, stdev=224.08 00:26:00.110 clat (msec): min=15, max=143, avg=62.78, stdev=19.76 00:26:00.110 lat (msec): min=15, max=143, avg=62.79, stdev=19.76 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 16], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 48], 00:26:00.110 | 30.00th=[ 49], 40.00th=[ 61], 50.00th=[ 61], 60.00th=[ 68], 00:26:00.110 | 70.00th=[ 72], 80.00th=[ 83], 90.00th=[ 85], 95.00th=[ 96], 00:26:00.110 | 99.00th=[ 120], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:26:00.110 | 99.99th=[ 144] 00:26:00.110 bw ( KiB/s): min= 848, max= 1280, per=4.26%, avg=1016.00, stdev=122.29, samples=20 00:26:00.110 iops : min= 212, max= 320, avg=254.00, stdev=30.57, samples=20 00:26:00.110 lat (msec) : 20=1.25%, 50=30.95%, 100=65.02%, 250=2.78% 00:26:00.110 cpu : usr=32.59%, sys=0.71%, ctx=849, majf=0, minf=9 00:26:00.110 IO depths : 1=0.7%, 2=1.4%, 4=7.3%, 8=77.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:26:00.110 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 complete : 0=0.0%, 4=89.5%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.110 issued rwts: total=2556,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.110 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.110 filename2: (groupid=0, jobs=1): err= 0: pid=101814: Sun Jul 14 10:18:30 2024 
00:26:00.110 read: IOPS=261, BW=1047KiB/s (1073kB/s)(10.3MiB/10047msec) 00:26:00.110 slat (nsec): min=4994, max=88635, avg=10643.84, stdev=4158.45 00:26:00.110 clat (msec): min=22, max=119, avg=61.04, stdev=18.25 00:26:00.110 lat (msec): min=22, max=119, avg=61.05, stdev=18.25 00:26:00.110 clat percentiles (msec): 00:26:00.110 | 1.00th=[ 33], 5.00th=[ 36], 10.00th=[ 37], 20.00th=[ 46], 00:26:00.110 | 30.00th=[ 48], 40.00th=[ 57], 50.00th=[ 61], 60.00th=[ 63], 00:26:00.110 | 70.00th=[ 72], 80.00th=[ 75], 90.00th=[ 85], 95.00th=[ 96], 00:26:00.110 | 99.00th=[ 109], 99.50th=[ 120], 99.90th=[ 121], 99.95th=[ 121], 00:26:00.110 | 99.99th=[ 121] 00:26:00.110 bw ( KiB/s): min= 848, max= 1328, per=4.38%, avg=1046.00, stdev=115.69, samples=20 00:26:00.110 iops : min= 212, max= 332, avg=261.50, stdev=28.92, samples=20 00:26:00.110 lat (msec) : 50=35.92%, 100=62.11%, 250=1.98% 00:26:00.110 cpu : usr=32.85%, sys=0.73%, ctx=873, majf=0, minf=9 00:26:00.110 IO depths : 1=0.8%, 2=1.6%, 4=7.1%, 8=77.3%, 16=13.2%, 32=0.0%, >=64=0.0% 00:26:00.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 issued rwts: total=2631,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.111 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.111 filename2: (groupid=0, jobs=1): err= 0: pid=101815: Sun Jul 14 10:18:30 2024 00:26:00.111 read: IOPS=233, BW=933KiB/s (956kB/s)(9360KiB/10030msec) 00:26:00.111 slat (usec): min=3, max=8037, avg=31.28, stdev=405.41 00:26:00.111 clat (msec): min=22, max=179, avg=68.34, stdev=21.76 00:26:00.111 lat (msec): min=22, max=179, avg=68.37, stdev=21.78 00:26:00.111 clat percentiles (msec): 00:26:00.111 | 1.00th=[ 24], 5.00th=[ 36], 10.00th=[ 40], 20.00th=[ 48], 00:26:00.111 | 30.00th=[ 60], 40.00th=[ 61], 50.00th=[ 69], 60.00th=[ 72], 00:26:00.111 | 70.00th=[ 84], 80.00th=[ 86], 90.00th=[ 96], 95.00th=[ 105], 00:26:00.111 | 99.00th=[ 123], 99.50th=[ 144], 99.90th=[ 180], 99.95th=[ 180], 00:26:00.111 | 99.99th=[ 180] 00:26:00.111 bw ( KiB/s): min= 680, max= 1472, per=3.90%, avg=929.10, stdev=194.55, samples=20 00:26:00.111 iops : min= 170, max= 368, avg=232.25, stdev=48.61, samples=20 00:26:00.111 lat (msec) : 50=22.95%, 100=71.41%, 250=5.64% 00:26:00.111 cpu : usr=33.31%, sys=0.68%, ctx=869, majf=0, minf=9 00:26:00.111 IO depths : 1=1.7%, 2=3.8%, 4=11.8%, 8=70.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:26:00.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 complete : 0=0.0%, 4=90.6%, 8=4.9%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 issued rwts: total=2340,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.111 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.111 filename2: (groupid=0, jobs=1): err= 0: pid=101816: Sun Jul 14 10:18:30 2024 00:26:00.111 read: IOPS=280, BW=1124KiB/s (1151kB/s)(11.0MiB/10050msec) 00:26:00.111 slat (usec): min=4, max=4023, avg=13.72, stdev=82.38 00:26:00.111 clat (msec): min=5, max=118, avg=56.74, stdev=19.51 00:26:00.111 lat (msec): min=5, max=118, avg=56.76, stdev=19.51 00:26:00.111 clat percentiles (msec): 00:26:00.111 | 1.00th=[ 11], 5.00th=[ 23], 10.00th=[ 33], 20.00th=[ 43], 00:26:00.111 | 30.00th=[ 47], 40.00th=[ 51], 50.00th=[ 58], 60.00th=[ 63], 00:26:00.111 | 70.00th=[ 67], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 91], 00:26:00.111 | 99.00th=[ 106], 99.50th=[ 112], 99.90th=[ 118], 99.95th=[ 118], 00:26:00.111 | 99.99th=[ 118] 00:26:00.111 bw ( KiB/s): min= 688, max= 1776, 
per=4.71%, avg=1122.80, stdev=237.96, samples=20 00:26:00.111 iops : min= 172, max= 444, avg=280.70, stdev=59.49, samples=20 00:26:00.111 lat (msec) : 10=0.89%, 20=1.91%, 50=37.12%, 100=57.78%, 250=2.30% 00:26:00.111 cpu : usr=44.34%, sys=0.85%, ctx=1397, majf=0, minf=9 00:26:00.111 IO depths : 1=1.6%, 2=3.8%, 4=11.9%, 8=70.8%, 16=11.8%, 32=0.0%, >=64=0.0% 00:26:00.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 complete : 0=0.0%, 4=90.6%, 8=4.8%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 issued rwts: total=2823,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.111 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.111 filename2: (groupid=0, jobs=1): err= 0: pid=101817: Sun Jul 14 10:18:30 2024 00:26:00.111 read: IOPS=254, BW=1016KiB/s (1041kB/s)(9.97MiB/10048msec) 00:26:00.111 slat (nsec): min=5320, max=49047, avg=11088.23, stdev=4282.18 00:26:00.111 clat (msec): min=23, max=155, avg=62.83, stdev=20.23 00:26:00.111 lat (msec): min=23, max=155, avg=62.84, stdev=20.23 00:26:00.111 clat percentiles (msec): 00:26:00.111 | 1.00th=[ 26], 5.00th=[ 35], 10.00th=[ 40], 20.00th=[ 46], 00:26:00.111 | 30.00th=[ 49], 40.00th=[ 56], 50.00th=[ 62], 60.00th=[ 66], 00:26:00.111 | 70.00th=[ 72], 80.00th=[ 81], 90.00th=[ 88], 95.00th=[ 96], 00:26:00.111 | 99.00th=[ 125], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:26:00.111 | 99.99th=[ 157] 00:26:00.111 bw ( KiB/s): min= 600, max= 1296, per=4.25%, avg=1014.80, stdev=186.96, samples=20 00:26:00.111 iops : min= 150, max= 324, avg=253.70, stdev=46.74, samples=20 00:26:00.111 lat (msec) : 50=31.49%, 100=65.30%, 250=3.21% 00:26:00.111 cpu : usr=42.90%, sys=1.00%, ctx=1387, majf=0, minf=9 00:26:00.111 IO depths : 1=0.8%, 2=1.8%, 4=8.7%, 8=75.5%, 16=13.1%, 32=0.0%, >=64=0.0% 00:26:00.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 complete : 0=0.0%, 4=89.6%, 8=6.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 issued rwts: total=2553,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.111 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:00.111 filename2: (groupid=0, jobs=1): err= 0: pid=101818: Sun Jul 14 10:18:30 2024 00:26:00.111 read: IOPS=281, BW=1127KiB/s (1154kB/s)(11.1MiB/10057msec) 00:26:00.111 slat (usec): min=4, max=4026, avg=17.37, stdev=168.45 00:26:00.111 clat (msec): min=20, max=135, avg=56.63, stdev=16.94 00:26:00.111 lat (msec): min=20, max=135, avg=56.64, stdev=16.94 00:26:00.111 clat percentiles (msec): 00:26:00.111 | 1.00th=[ 23], 5.00th=[ 34], 10.00th=[ 37], 20.00th=[ 42], 00:26:00.111 | 30.00th=[ 46], 40.00th=[ 48], 50.00th=[ 56], 60.00th=[ 61], 00:26:00.111 | 70.00th=[ 66], 80.00th=[ 72], 90.00th=[ 81], 95.00th=[ 85], 00:26:00.111 | 99.00th=[ 96], 99.50th=[ 121], 99.90th=[ 136], 99.95th=[ 136], 00:26:00.111 | 99.99th=[ 136] 00:26:00.111 bw ( KiB/s): min= 872, max= 1453, per=4.72%, avg=1126.65, stdev=165.22, samples=20 00:26:00.111 iops : min= 218, max= 363, avg=281.65, stdev=41.28, samples=20 00:26:00.111 lat (msec) : 50=44.12%, 100=54.99%, 250=0.88% 00:26:00.111 cpu : usr=43.45%, sys=0.89%, ctx=1179, majf=0, minf=9 00:26:00.111 IO depths : 1=0.2%, 2=0.5%, 4=6.5%, 8=79.1%, 16=13.8%, 32=0.0%, >=64=0.0% 00:26:00.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 complete : 0=0.0%, 4=89.4%, 8=6.5%, 16=4.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.111 issued rwts: total=2833,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.111 latency : target=0, window=0, percentile=100.00%, depth=16 
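Each fio record above has the same shape: a job header, completion-latency (clat) percentiles, per-sample bandwidth and IOPS, the latency split by bucket, and the IO-depth and submit/complete histograms. When skimming a transcript like this, the per-job headlines can be pulled out of a saved copy of the console output; a minimal sketch, assuming the log was saved as build.log (a placeholder name):

# fio prints one "read: IOPS=..., BW=..." line per job; extract them all.
grep -o 'read: IOPS=[0-9.]*, BW=[^)]*)' build.log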
00:26:00.111 00:26:00.111 Run status group 0 (all jobs): 00:26:00.111 READ: bw=23.3MiB/s (24.4MB/s), 871KiB/s-1204KiB/s (892kB/s-1233kB/s), io=234MiB (246MB), run=10003-10064msec 00:26:00.370 10:18:30 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:00.370 10:18:30 -- target/dif.sh@43 -- # local sub 00:26:00.370 10:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.370 10:18:30 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.370 10:18:30 -- target/dif.sh@36 -- # local sub_id=0 00:26:00.370 10:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.370 10:18:30 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:00.370 10:18:30 -- target/dif.sh@36 -- # local sub_id=1 00:26:00.370 10:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.370 10:18:30 -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:00.370 10:18:30 -- target/dif.sh@36 -- # local sub_id=2 00:26:00.370 10:18:30 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # NULL_DIF=1 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # numjobs=2 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # iodepth=8 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # runtime=5 00:26:00.370 10:18:30 -- target/dif.sh@115 -- # files=1 00:26:00.370 10:18:30 -- target/dif.sh@117 -- # create_subsystems 0 1 00:26:00.370 10:18:30 -- target/dif.sh@28 -- # local sub 00:26:00.370 10:18:30 -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.370 10:18:30 -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.370 10:18:30 -- target/dif.sh@18 -- # local sub_id=0 00:26:00.370 10:18:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.370 10:18:30 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 bdev_null0 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 [2024-07-14 10:18:30.865472] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.370 10:18:30 -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.370 10:18:30 -- target/dif.sh@31 -- # create_subsystem 1 00:26:00.370 10:18:30 -- target/dif.sh@18 -- # local sub_id=1 00:26:00.370 10:18:30 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:00.370 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.370 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.370 bdev_null1 00:26:00.370 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.371 10:18:30 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:00.371 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.371 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.371 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.371 10:18:30 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:00.371 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.371 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.371 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.371 10:18:30 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.371 10:18:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:00.371 10:18:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.371 10:18:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:00.371 10:18:30 -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:00.371 10:18:30 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:00.371 10:18:30 -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.371 10:18:30 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.371 10:18:30 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:00.371 10:18:30 -- target/dif.sh@54 -- # local file 00:26:00.371 10:18:30 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.371 10:18:30 -- target/dif.sh@56 -- # cat 00:26:00.371 10:18:30 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:00.371 10:18:30 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.371 10:18:30 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:00.371 10:18:30 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.371 10:18:30 -- common/autotest_common.sh@1320 -- # shift 00:26:00.371 10:18:30 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:00.371 10:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.371 10:18:30 -- nvmf/common.sh@520 -- # config=() 00:26:00.371 10:18:30 -- nvmf/common.sh@520 -- # local subsystem config 00:26:00.371 10:18:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:00.371 10:18:30 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.371 10:18:30 -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.371 10:18:30 -- target/dif.sh@73 -- # cat 00:26:00.371 10:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.371 10:18:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:00.371 { 00:26:00.371 "params": { 00:26:00.371 "name": "Nvme$subsystem", 00:26:00.371 "trtype": "$TEST_TRANSPORT", 00:26:00.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.371 "adrfam": "ipv4", 00:26:00.371 "trsvcid": "$NVMF_PORT", 00:26:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.371 "hdgst": ${hdgst:-false}, 00:26:00.371 "ddgst": ${ddgst:-false} 00:26:00.371 }, 00:26:00.371 "method": "bdev_nvme_attach_controller" 00:26:00.371 } 00:26:00.371 EOF 00:26:00.371 )") 00:26:00.371 10:18:30 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:00.371 10:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:00.371 10:18:30 -- nvmf/common.sh@542 -- # cat 00:26:00.371 10:18:30 -- target/dif.sh@72 -- # (( file++ )) 00:26:00.371 10:18:30 -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.371 10:18:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:00.371 10:18:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:00.371 { 00:26:00.371 "params": { 00:26:00.371 "name": "Nvme$subsystem", 00:26:00.371 "trtype": "$TEST_TRANSPORT", 00:26:00.371 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.371 "adrfam": "ipv4", 00:26:00.371 "trsvcid": "$NVMF_PORT", 00:26:00.371 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.371 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.371 "hdgst": ${hdgst:-false}, 00:26:00.371 "ddgst": ${ddgst:-false} 00:26:00.371 }, 00:26:00.371 "method": "bdev_nvme_attach_controller" 00:26:00.371 } 00:26:00.371 EOF 00:26:00.371 )") 00:26:00.371 10:18:30 -- nvmf/common.sh@542 -- # cat 00:26:00.629 10:18:30 -- nvmf/common.sh@544 -- # jq . 
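The trace above is the plumbing for the next fio run: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem through the here-documents shown (the assembled config is printed just below), gen_fio_conf builds the jobfile, and both are handed to fio over the anonymous descriptors /dev/fd/62 and /dev/fd/61 with the SPDK fio plugin preloaded. A standalone sketch of the same flow, with ordinary files standing in for the descriptors; the wrapper object and the /tmp file names are illustrative, and only the attach parameters are taken from the trace:

cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hostnqn": "nqn.2016-06.io.spdk:host0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
EOF
# Preload the SPDK engine so fio can resolve ioengine=spdk_bdev.
LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json /tmp/job.fio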
00:26:00.629 10:18:30 -- nvmf/common.sh@545 -- # IFS=, 00:26:00.629 10:18:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:00.629 "params": { 00:26:00.629 "name": "Nvme0", 00:26:00.629 "trtype": "tcp", 00:26:00.629 "traddr": "10.0.0.2", 00:26:00.629 "adrfam": "ipv4", 00:26:00.629 "trsvcid": "4420", 00:26:00.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.629 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.629 "hdgst": false, 00:26:00.629 "ddgst": false 00:26:00.629 }, 00:26:00.629 "method": "bdev_nvme_attach_controller" 00:26:00.629 },{ 00:26:00.629 "params": { 00:26:00.630 "name": "Nvme1", 00:26:00.630 "trtype": "tcp", 00:26:00.630 "traddr": "10.0.0.2", 00:26:00.630 "adrfam": "ipv4", 00:26:00.630 "trsvcid": "4420", 00:26:00.630 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.630 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.630 "hdgst": false, 00:26:00.630 "ddgst": false 00:26:00.630 }, 00:26:00.630 "method": "bdev_nvme_attach_controller" 00:26:00.630 }' 00:26:00.630 10:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:00.630 10:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:00.630 10:18:30 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.630 10:18:30 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:00.630 10:18:30 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:00.630 10:18:30 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:00.630 10:18:30 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:00.630 10:18:30 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:00.630 10:18:30 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:00.630 10:18:30 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.630 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:00.630 ... 00:26:00.630 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:00.630 ... 00:26:00.630 fio-3.35 00:26:00.630 Starting 4 threads 00:26:01.194 [2024-07-14 10:18:31.665436] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:01.194 [2024-07-14 10:18:31.665530] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:06.453 00:26:06.453 filename0: (groupid=0, jobs=1): err= 0: pid=101950: Sun Jul 14 10:18:36 2024 00:26:06.453 read: IOPS=2129, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5001msec) 00:26:06.453 slat (nsec): min=7032, max=67115, avg=13998.90, stdev=5562.30 00:26:06.453 clat (usec): min=2181, max=5201, avg=3693.74, stdev=141.35 00:26:06.453 lat (usec): min=2205, max=5229, avg=3707.74, stdev=141.28 00:26:06.453 clat percentiles (usec): 00:26:06.453 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:26:06.453 | 30.00th=[ 3621], 40.00th=[ 3654], 50.00th=[ 3654], 60.00th=[ 3687], 00:26:06.454 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:06.454 | 99.00th=[ 4080], 99.50th=[ 4146], 99.90th=[ 4359], 99.95th=[ 4490], 00:26:06.454 | 99.99th=[ 4621] 00:26:06.454 bw ( KiB/s): min=16768, max=17408, per=25.04%, avg=17070.44, stdev=183.36, samples=9 00:26:06.454 iops : min= 2096, max= 2176, avg=2133.78, stdev=22.90, samples=9 00:26:06.454 lat (msec) : 4=97.28%, 10=2.72% 00:26:06.454 cpu : usr=94.66%, sys=4.22%, ctx=11, majf=0, minf=0 00:26:06.454 IO depths : 1=12.3%, 2=25.0%, 4=50.0%, 8=12.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 issued rwts: total=10648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.454 filename0: (groupid=0, jobs=1): err= 0: pid=101951: Sun Jul 14 10:18:36 2024 00:26:06.454 read: IOPS=2127, BW=16.6MiB/s (17.4MB/s)(83.1MiB/5001msec) 00:26:06.454 slat (nsec): min=6902, max=62481, avg=15480.70, stdev=5176.27 00:26:06.454 clat (usec): min=2042, max=6343, avg=3684.56, stdev=155.73 00:26:06.454 lat (usec): min=2051, max=6368, avg=3700.04, stdev=156.16 00:26:06.454 clat percentiles (usec): 00:26:06.454 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 3556], 20.00th=[ 3589], 00:26:06.454 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:26:06.454 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:06.454 | 99.00th=[ 4080], 99.50th=[ 4178], 99.90th=[ 4490], 99.95th=[ 6259], 00:26:06.454 | 99.99th=[ 6325] 00:26:06.454 bw ( KiB/s): min=16896, max=17280, per=25.01%, avg=17052.44, stdev=124.39, samples=9 00:26:06.454 iops : min= 2112, max= 2160, avg=2131.56, stdev=15.55, samples=9 00:26:06.454 lat (msec) : 4=97.46%, 10=2.54% 00:26:06.454 cpu : usr=93.22%, sys=5.62%, ctx=40, majf=0, minf=0 00:26:06.454 IO depths : 1=12.4%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 issued rwts: total=10640,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.454 filename1: (groupid=0, jobs=1): err= 0: pid=101952: Sun Jul 14 10:18:36 2024 00:26:06.454 read: IOPS=2129, BW=16.6MiB/s (17.4MB/s)(83.2MiB/5001msec) 00:26:06.454 slat (usec): min=6, max=165, avg=15.36, stdev= 5.38 00:26:06.454 clat (usec): min=1048, max=5933, avg=3681.39, stdev=167.27 00:26:06.454 lat (usec): min=1055, max=5946, avg=3696.75, stdev=167.87 00:26:06.454 clat percentiles (usec): 00:26:06.454 | 1.00th=[ 3458], 5.00th=[ 3523], 10.00th=[ 
3556], 20.00th=[ 3589], 00:26:06.454 | 30.00th=[ 3621], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3687], 00:26:06.454 | 70.00th=[ 3720], 80.00th=[ 3785], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:06.454 | 99.00th=[ 4113], 99.50th=[ 4178], 99.90th=[ 5473], 99.95th=[ 5473], 00:26:06.454 | 99.99th=[ 5735] 00:26:06.454 bw ( KiB/s): min=16896, max=17280, per=25.02%, avg=17056.22, stdev=123.94, samples=9 00:26:06.454 iops : min= 2112, max= 2160, avg=2132.00, stdev=15.49, samples=9 00:26:06.454 lat (msec) : 2=0.08%, 4=97.29%, 10=2.64% 00:26:06.454 cpu : usr=94.32%, sys=4.38%, ctx=59, majf=0, minf=0 00:26:06.454 IO depths : 1=12.1%, 2=25.0%, 4=50.0%, 8=12.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 issued rwts: total=10648,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.454 filename1: (groupid=0, jobs=1): err= 0: pid=101953: Sun Jul 14 10:18:36 2024 00:26:06.454 read: IOPS=2138, BW=16.7MiB/s (17.5MB/s)(83.6MiB/5003msec) 00:26:06.454 slat (nsec): min=6793, max=51758, avg=8753.70, stdev=3713.84 00:26:06.454 clat (usec): min=1089, max=4330, avg=3699.07, stdev=194.24 00:26:06.454 lat (usec): min=1097, max=4340, avg=3707.82, stdev=194.34 00:26:06.454 clat percentiles (usec): 00:26:06.454 | 1.00th=[ 3130], 5.00th=[ 3556], 10.00th=[ 3589], 20.00th=[ 3621], 00:26:06.454 | 30.00th=[ 3654], 40.00th=[ 3654], 50.00th=[ 3687], 60.00th=[ 3720], 00:26:06.454 | 70.00th=[ 3752], 80.00th=[ 3785], 90.00th=[ 3884], 95.00th=[ 3949], 00:26:06.454 | 99.00th=[ 4080], 99.50th=[ 4146], 99.90th=[ 4293], 99.95th=[ 4293], 00:26:06.454 | 99.99th=[ 4293] 00:26:06.454 bw ( KiB/s): min=16992, max=17424, per=25.16%, avg=17155.56, stdev=141.71, samples=9 00:26:06.454 iops : min= 2124, max= 2178, avg=2144.44, stdev=17.71, samples=9 00:26:06.454 lat (msec) : 2=0.32%, 4=96.89%, 10=2.79% 00:26:06.454 cpu : usr=93.88%, sys=4.96%, ctx=6, majf=0, minf=0 00:26:06.454 IO depths : 1=9.3%, 2=22.1%, 4=52.7%, 8=15.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:06.454 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 complete : 0=0.0%, 4=89.4%, 8=10.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:06.454 issued rwts: total=10698,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:06.454 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:06.454 00:26:06.454 Run status group 0 (all jobs): 00:26:06.454 READ: bw=66.6MiB/s (69.8MB/s), 16.6MiB/s-16.7MiB/s (17.4MB/s-17.5MB/s), io=333MiB (349MB), run=5001-5003msec 00:26:06.712 10:18:37 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:06.712 10:18:37 -- target/dif.sh@43 -- # local sub 00:26:06.712 10:18:37 -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.712 10:18:37 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:06.712 10:18:37 -- target/dif.sh@36 -- # local sub_id=0 00:26:06.713 10:18:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@45 -- # for sub in "$@" 00:26:06.713 10:18:37 -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:06.713 10:18:37 -- target/dif.sh@36 -- # local sub_id=1 00:26:06.713 10:18:37 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 00:26:06.713 real 0m23.724s 00:26:06.713 user 2m6.795s 00:26:06.713 sys 0m4.702s 00:26:06.713 10:18:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 ************************************ 00:26:06.713 END TEST fio_dif_rand_params 00:26:06.713 ************************************ 00:26:06.713 10:18:37 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:06.713 10:18:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:06.713 10:18:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 ************************************ 00:26:06.713 START TEST fio_dif_digest 00:26:06.713 ************************************ 00:26:06.713 10:18:37 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:26:06.713 10:18:37 -- target/dif.sh@123 -- # local NULL_DIF 00:26:06.713 10:18:37 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:06.713 10:18:37 -- target/dif.sh@125 -- # local hdgst ddgst 00:26:06.713 10:18:37 -- target/dif.sh@127 -- # NULL_DIF=3 00:26:06.713 10:18:37 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:06.713 10:18:37 -- target/dif.sh@127 -- # numjobs=3 00:26:06.713 10:18:37 -- target/dif.sh@127 -- # iodepth=3 00:26:06.713 10:18:37 -- target/dif.sh@127 -- # runtime=10 00:26:06.713 10:18:37 -- target/dif.sh@128 -- # hdgst=true 00:26:06.713 10:18:37 -- target/dif.sh@128 -- # ddgst=true 00:26:06.713 10:18:37 -- target/dif.sh@130 -- # create_subsystems 0 00:26:06.713 10:18:37 -- target/dif.sh@28 -- # local sub 00:26:06.713 10:18:37 -- target/dif.sh@30 -- # for sub in "$@" 00:26:06.713 10:18:37 -- target/dif.sh@31 -- # create_subsystem 0 00:26:06.713 10:18:37 -- target/dif.sh@18 -- # local sub_id=0 00:26:06.713 10:18:37 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 bdev_null0 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:06.713 
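The digest-test target above is assembled from a short RPC sequence: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3; a subsystem that accepts any host NQN; the namespace attach; and, just below, a TCP listener on 10.0.0.2:4420. rpc_cmd is the test suite's wrapper around scripts/rpc.py, so the equivalent standalone calls, assuming a running nvmf_tgt on the default RPC socket, look roughly like:

# Recreate the digest-test target by hand.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420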
10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:06.713 10:18:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:06.713 10:18:37 -- common/autotest_common.sh@10 -- # set +x 00:26:06.713 [2024-07-14 10:18:37.194241] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.713 10:18:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:06.713 10:18:37 -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:06.713 10:18:37 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:06.713 10:18:37 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:06.713 10:18:37 -- nvmf/common.sh@520 -- # config=() 00:26:06.713 10:18:37 -- nvmf/common.sh@520 -- # local subsystem config 00:26:06.713 10:18:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:26:06.713 10:18:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:26:06.713 { 00:26:06.713 "params": { 00:26:06.713 "name": "Nvme$subsystem", 00:26:06.713 "trtype": "$TEST_TRANSPORT", 00:26:06.713 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:06.713 "adrfam": "ipv4", 00:26:06.713 "trsvcid": "$NVMF_PORT", 00:26:06.713 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:06.713 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:06.713 "hdgst": ${hdgst:-false}, 00:26:06.713 "ddgst": ${ddgst:-false} 00:26:06.713 }, 00:26:06.713 "method": "bdev_nvme_attach_controller" 00:26:06.713 } 00:26:06.713 EOF 00:26:06.713 )") 00:26:06.713 10:18:37 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.713 10:18:37 -- target/dif.sh@82 -- # gen_fio_conf 00:26:06.713 10:18:37 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.713 10:18:37 -- target/dif.sh@54 -- # local file 00:26:06.713 10:18:37 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:26:06.713 10:18:37 -- target/dif.sh@56 -- # cat 00:26:06.713 10:18:37 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:06.713 10:18:37 -- common/autotest_common.sh@1318 -- # local sanitizers 00:26:06.713 10:18:37 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.713 10:18:37 -- common/autotest_common.sh@1320 -- # shift 00:26:06.713 10:18:37 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:26:06.713 10:18:37 -- nvmf/common.sh@542 -- # cat 00:26:06.713 10:18:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # grep libasan 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:06.713 10:18:37 -- target/dif.sh@72 -- # (( file = 1 )) 00:26:06.713 10:18:37 -- target/dif.sh@72 -- # (( file <= files )) 00:26:06.713 10:18:37 -- nvmf/common.sh@544 -- # jq . 
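Note how the here-documents fill "hdgst" and "ddgst" with shell default expansion: digests stay off unless the caller exports the variables, and fio_dif_digest sets both to true (target/dif.sh@128 above), which turns on the CRC32C header and data digests of NVMe/TCP for this connection. The idiom in isolation:

# ${var:-false} expands to $var when it is set, else to the literal "false".
hdgst=true ddgst=true
printf '"hdgst": %s,\n"ddgst": %s\n' "${hdgst:-false}" "${ddgst:-false}"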
00:26:06.713 10:18:37 -- nvmf/common.sh@545 -- # IFS=, 00:26:06.713 10:18:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:26:06.713 "params": { 00:26:06.713 "name": "Nvme0", 00:26:06.713 "trtype": "tcp", 00:26:06.713 "traddr": "10.0.0.2", 00:26:06.713 "adrfam": "ipv4", 00:26:06.713 "trsvcid": "4420", 00:26:06.713 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:06.713 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:06.713 "hdgst": true, 00:26:06.713 "ddgst": true 00:26:06.713 }, 00:26:06.713 "method": "bdev_nvme_attach_controller" 00:26:06.713 }' 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:06.713 10:18:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:06.713 10:18:37 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:26:06.713 10:18:37 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:26:06.971 10:18:37 -- common/autotest_common.sh@1324 -- # asan_lib= 00:26:06.971 10:18:37 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:26:06.971 10:18:37 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:26:06.971 10:18:37 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:06.971 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:06.971 ... 00:26:06.972 fio-3.35 00:26:06.972 Starting 3 threads 00:26:07.230 [2024-07-14 10:18:37.761524] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:26:07.230 [2024-07-14 10:18:37.761616] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:26:19.452 00:26:19.452 filename0: (groupid=0, jobs=1): err= 0: pid=102059: Sun Jul 14 10:18:47 2024 00:26:19.452 read: IOPS=194, BW=24.3MiB/s (25.5MB/s)(244MiB/10045msec) 00:26:19.452 slat (nsec): min=6605, max=42861, avg=10324.27, stdev=3888.55 00:26:19.452 clat (usec): min=9315, max=57175, avg=15404.02, stdev=1564.81 00:26:19.452 lat (usec): min=9322, max=57184, avg=15414.35, stdev=1565.04 00:26:19.452 clat percentiles (usec): 00:26:19.452 | 1.00th=[10159], 5.00th=[14091], 10.00th=[14353], 20.00th=[14746], 00:26:19.452 | 30.00th=[15008], 40.00th=[15139], 50.00th=[15401], 60.00th=[15533], 00:26:19.452 | 70.00th=[15795], 80.00th=[16057], 90.00th=[16450], 95.00th=[16909], 00:26:19.452 | 99.00th=[17695], 99.50th=[18220], 99.90th=[46400], 99.95th=[57410], 00:26:19.452 | 99.99th=[57410] 00:26:19.452 bw ( KiB/s): min=23808, max=26112, per=27.39%, avg=24942.10, stdev=604.10, samples=20 00:26:19.452 iops : min= 186, max= 204, avg=194.80, stdev= 4.77, samples=20 00:26:19.452 lat (msec) : 10=0.72%, 20=99.18%, 50=0.05%, 100=0.05% 00:26:19.452 cpu : usr=93.26%, sys=5.63%, ctx=90, majf=0, minf=9 00:26:19.452 IO depths : 1=17.4%, 2=82.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.452 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.452 issued rwts: total=1951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.452 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.452 filename0: (groupid=0, jobs=1): err= 0: pid=102060: Sun Jul 14 10:18:47 2024 00:26:19.452 read: IOPS=276, BW=34.6MiB/s (36.3MB/s)(346MiB/10006msec) 00:26:19.452 slat (nsec): min=6704, max=66822, avg=11276.88, stdev=3612.70 00:26:19.452 clat (usec): min=8143, max=52796, avg=10818.34, stdev=2072.89 00:26:19.452 lat (usec): min=8153, max=52807, avg=10829.61, stdev=2072.90 00:26:19.453 clat percentiles (usec): 00:26:19.453 | 1.00th=[ 8979], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10028], 00:26:19.453 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:26:19.453 | 70.00th=[11076], 80.00th=[11338], 90.00th=[11731], 95.00th=[11994], 00:26:19.453 | 99.00th=[12780], 99.50th=[13435], 99.90th=[52167], 99.95th=[52167], 00:26:19.453 | 99.99th=[52691] 00:26:19.453 bw ( KiB/s): min=32000, max=36864, per=39.16%, avg=35653.63, stdev=1155.77, samples=19 00:26:19.453 iops : min= 250, max= 288, avg=278.47, stdev= 9.05, samples=19 00:26:19.453 lat (msec) : 10=17.61%, 20=82.17%, 100=0.22% 00:26:19.453 cpu : usr=93.13%, sys=5.56%, ctx=24, majf=0, minf=0 00:26:19.453 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.453 issued rwts: total=2771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.453 filename0: (groupid=0, jobs=1): err= 0: pid=102061: Sun Jul 14 10:18:47 2024 00:26:19.453 read: IOPS=242, BW=30.3MiB/s (31.7MB/s)(303MiB/10005msec) 00:26:19.453 slat (nsec): min=6608, max=48660, avg=11038.15, stdev=3818.76 00:26:19.453 clat (usec): min=6381, max=18597, avg=12371.40, stdev=1131.62 00:26:19.453 lat (usec): min=6392, max=18622, avg=12382.44, stdev=1131.82 00:26:19.453 clat percentiles (usec): 
00:26:19.453 | 1.00th=[ 8291], 5.00th=[10683], 10.00th=[11076], 20.00th=[11600], 00:26:19.453 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:26:19.453 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13698], 95.00th=[14091], 00:26:19.453 | 99.00th=[15008], 99.50th=[15401], 99.90th=[18482], 99.95th=[18482], 00:26:19.453 | 99.99th=[18482] 00:26:19.453 bw ( KiB/s): min=28160, max=32256, per=33.99%, avg=30945.58, stdev=923.39, samples=19 00:26:19.453 iops : min= 220, max= 252, avg=241.74, stdev= 7.24, samples=19 00:26:19.453 lat (msec) : 10=1.61%, 20=98.39% 00:26:19.453 cpu : usr=93.05%, sys=5.72%, ctx=9, majf=0, minf=9 00:26:19.453 IO depths : 1=2.4%, 2=97.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:19.453 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.453 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:19.453 issued rwts: total=2423,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:19.453 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:19.453 00:26:19.453 Run status group 0 (all jobs): 00:26:19.453 READ: bw=88.9MiB/s (93.2MB/s), 24.3MiB/s-34.6MiB/s (25.5MB/s-36.3MB/s), io=893MiB (937MB), run=10005-10045msec 00:26:19.453 10:18:48 -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:19.453 10:18:48 -- target/dif.sh@43 -- # local sub 00:26:19.453 10:18:48 -- target/dif.sh@45 -- # for sub in "$@" 00:26:19.453 10:18:48 -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:19.453 10:18:48 -- target/dif.sh@36 -- # local sub_id=0 00:26:19.453 10:18:48 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:19.453 10:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.453 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:19.453 10:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.453 10:18:48 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:19.453 10:18:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:19.453 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:19.453 10:18:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:19.453 00:26:19.453 real 0m10.991s 00:26:19.453 user 0m28.628s 00:26:19.453 sys 0m1.939s 00:26:19.453 10:18:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.453 ************************************ 00:26:19.453 END TEST fio_dif_digest 00:26:19.453 ************************************ 00:26:19.453 10:18:48 -- common/autotest_common.sh@10 -- # set +x 00:26:19.453 10:18:48 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:19.453 10:18:48 -- target/dif.sh@147 -- # nvmftestfini 00:26:19.453 10:18:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:19.453 10:18:48 -- nvmf/common.sh@116 -- # sync 00:26:19.453 10:18:48 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:19.453 10:18:48 -- nvmf/common.sh@119 -- # set +e 00:26:19.453 10:18:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:19.453 10:18:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:19.453 rmmod nvme_tcp 00:26:19.453 rmmod nvme_fabrics 00:26:19.453 rmmod nvme_keyring 00:26:19.453 10:18:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:19.453 10:18:48 -- nvmf/common.sh@123 -- # set -e 00:26:19.453 10:18:48 -- nvmf/common.sh@124 -- # return 0 00:26:19.453 10:18:48 -- nvmf/common.sh@477 -- # '[' -n 101292 ']' 00:26:19.453 10:18:48 -- nvmf/common.sh@478 -- # killprocess 101292 00:26:19.453 10:18:48 -- common/autotest_common.sh@926 -- # '[' -z 101292 ']' 
00:26:19.453 10:18:48 -- common/autotest_common.sh@930 -- # kill -0 101292 00:26:19.453 10:18:48 -- common/autotest_common.sh@931 -- # uname 00:26:19.453 10:18:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:19.453 10:18:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 101292 00:26:19.453 10:18:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:19.453 10:18:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:19.453 killing process with pid 101292 00:26:19.453 10:18:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 101292' 00:26:19.453 10:18:48 -- common/autotest_common.sh@945 -- # kill 101292 00:26:19.453 10:18:48 -- common/autotest_common.sh@950 -- # wait 101292 00:26:19.453 10:18:48 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:19.453 10:18:48 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:19.453 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:19.453 Waiting for block devices as requested 00:26:19.453 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:19.453 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:19.453 10:18:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:19.453 10:18:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:19.453 10:18:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:19.453 10:18:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:26:19.453 10:18:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.453 10:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:19.453 10:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.453 10:18:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:19.453 00:26:19.453 real 0m59.913s 00:26:19.453 user 3m53.407s 00:26:19.453 sys 0m13.940s 00:26:19.453 10:18:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:19.453 10:18:49 -- common/autotest_common.sh@10 -- # set +x 00:26:19.453 ************************************ 00:26:19.453 END TEST nvmf_dif 00:26:19.453 ************************************ 00:26:19.453 10:18:49 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:19.453 10:18:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:19.453 10:18:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:19.453 10:18:49 -- common/autotest_common.sh@10 -- # set +x 00:26:19.453 ************************************ 00:26:19.453 START TEST nvmf_abort_qd_sizes 00:26:19.453 ************************************ 00:26:19.453 10:18:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:19.453 * Looking for test storage... 
00:26:19.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:26:19.453 10:18:49 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:19.453 10:18:49 -- nvmf/common.sh@7 -- # uname -s 00:26:19.453 10:18:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.453 10:18:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.453 10:18:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.453 10:18:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:19.453 10:18:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:19.453 10:18:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:19.453 10:18:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.453 10:18:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:19.453 10:18:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.453 10:18:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:19.453 10:18:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:26:19.453 10:18:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=62b509c9-30ed-43a8-a92a-70e3bfe74b31 00:26:19.453 10:18:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.453 10:18:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:19.453 10:18:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:19.453 10:18:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:19.453 10:18:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.453 10:18:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.453 10:18:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.453 10:18:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.453 10:18:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.453 10:18:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.453 10:18:49 -- paths/export.sh@5 -- # export PATH 00:26:19.453 10:18:49 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.453 10:18:49 -- nvmf/common.sh@46 -- # : 0 00:26:19.453 10:18:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:26:19.453 10:18:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:26:19.453 10:18:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:26:19.453 10:18:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.453 10:18:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.453 10:18:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:26:19.453 10:18:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:26:19.454 10:18:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:26:19.454 10:18:49 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:26:19.454 10:18:49 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:26:19.454 10:18:49 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.454 10:18:49 -- nvmf/common.sh@436 -- # prepare_net_devs 00:26:19.454 10:18:49 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:26:19.454 10:18:49 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:26:19.454 10:18:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:19.454 10:18:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:19.454 10:18:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:19.454 10:18:49 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:26:19.454 10:18:49 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:26:19.454 10:18:49 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:26:19.454 10:18:49 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:26:19.454 10:18:49 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:26:19.454 10:18:49 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:26:19.454 10:18:49 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:19.454 10:18:49 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:19.454 10:18:49 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:19.454 10:18:49 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:26:19.454 10:18:49 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:19.454 10:18:49 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:19.454 10:18:49 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:19.454 10:18:49 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:19.454 10:18:49 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:19.454 10:18:49 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:19.454 10:18:49 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:19.454 10:18:49 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:19.454 10:18:49 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:26:19.454 10:18:49 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:26:19.454 Cannot find device "nvmf_tgt_br" 00:26:19.454 10:18:49 -- nvmf/common.sh@154 -- # true 00:26:19.454 10:18:49 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:26:19.454 Cannot find device "nvmf_tgt_br2" 00:26:19.454 10:18:49 -- nvmf/common.sh@155 -- # true 
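Note on the "Cannot find device" messages just above (and the down/delete failures that follow): nvmf_veth_init starts by tearing down whatever topology a previous run may have left behind, and the bare `true` entries in the xtrace show that each teardown command is guarded, so a missing device on a fresh host is expected noise rather than a failure. A minimal sketch of that pattern, assuming the `|| true` guards the trace implies and using only commands visible in it:

    # guarded teardown: a failure on a clean host is swallowed,
    # which is why the xtrace prints a bare 'true' after each error
    ip link set nvmf_init_br nomaster || true
    ip link set nvmf_tgt_br  nomaster || true   # "Cannot find device" -> true
    ip link set nvmf_tgt_br2 nomaster || true
    ip link delete nvmf_br type bridge || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if || true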
00:26:19.454 10:18:49 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:26:19.454 10:18:49 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:26:19.454 Cannot find device "nvmf_tgt_br" 00:26:19.454 10:18:49 -- nvmf/common.sh@157 -- # true 00:26:19.454 10:18:49 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:26:19.454 Cannot find device "nvmf_tgt_br2" 00:26:19.454 10:18:49 -- nvmf/common.sh@158 -- # true 00:26:19.454 10:18:49 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:26:19.454 10:18:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:26:19.454 10:18:49 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:19.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.454 10:18:49 -- nvmf/common.sh@161 -- # true 00:26:19.454 10:18:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:19.454 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:19.454 10:18:49 -- nvmf/common.sh@162 -- # true 00:26:19.454 10:18:49 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:26:19.454 10:18:49 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:19.454 10:18:49 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:19.454 10:18:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:19.454 10:18:49 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:19.454 10:18:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:19.454 10:18:49 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:19.454 10:18:49 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:19.454 10:18:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:19.454 10:18:49 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:26:19.454 10:18:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:26:19.454 10:18:49 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:26:19.454 10:18:49 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:26:19.454 10:18:49 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:19.454 10:18:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:19.454 10:18:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:19.454 10:18:49 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:26:19.454 10:18:49 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:26:19.454 10:18:49 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:26:19.454 10:18:49 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:19.454 10:18:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:19.454 10:18:49 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:19.454 10:18:49 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:19.454 10:18:49 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:26:19.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:19.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:26:19.454 00:26:19.454 --- 10.0.0.2 ping statistics --- 00:26:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.454 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:19.454 10:18:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:26:19.454 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:19.454 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.072 ms 00:26:19.454 00:26:19.454 --- 10.0.0.3 ping statistics --- 00:26:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.454 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:26:19.454 10:18:49 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:19.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:19.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:26:19.454 00:26:19.454 --- 10.0.0.1 ping statistics --- 00:26:19.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:19.454 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:26:19.454 10:18:49 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:19.454 10:18:49 -- nvmf/common.sh@421 -- # return 0 00:26:19.454 10:18:49 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:26:19.454 10:18:49 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:26:19.712 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:19.712 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:26:19.972 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:26:19.972 10:18:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:19.972 10:18:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:26:19.972 10:18:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:26:19.972 10:18:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:19.972 10:18:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:26:19.972 10:18:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:26:19.972 10:18:50 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:26:19.972 10:18:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:26:19.972 10:18:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:19.972 10:18:50 -- common/autotest_common.sh@10 -- # set +x 00:26:19.972 10:18:50 -- nvmf/common.sh@469 -- # nvmfpid=102650 00:26:19.972 10:18:50 -- nvmf/common.sh@470 -- # waitforlisten 102650 00:26:19.972 10:18:50 -- common/autotest_common.sh@819 -- # '[' -z 102650 ']' 00:26:19.972 10:18:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.972 10:18:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:19.972 10:18:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:26:19.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.972 10:18:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.972 10:18:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:26:19.972 10:18:50 -- common/autotest_common.sh@10 -- # set +x 00:26:19.972 [2024-07-14 10:18:50.442691] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 23.11.0 initialization... 
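The three pings above close out nvmf_veth_init; setup.sh then rebinds the NVMe devices to uio_pci_generic and nvmf_tgt is launched inside the target namespace (note the `ip netns exec nvmf_tgt_ns_spdk` prefix folded into NVMF_APP at nvmf/common.sh@208). Condensed, the topology the TCP tests run over looks like the sketch below; every name and address is taken from this trace, with the link-up steps and error guards omitted:

    ip netns add nvmf_tgt_ns_spdk                              # target side lives in its own netns
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator pair (host side)
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br   # target pair 1
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2  # target pair 2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link add nvmf_br type bridge                            # joins the three host-side ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2                                         # host -> target sanity check, as above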
00:26:19.972 [2024-07-14 10:18:50.442795] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:20.231 [2024-07-14 10:18:50.589248] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:20.231 [2024-07-14 10:18:50.667948] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:26:20.231 [2024-07-14 10:18:50.668129] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:20.231 [2024-07-14 10:18:50.668146] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:20.231 [2024-07-14 10:18:50.668158] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:20.231 [2024-07-14 10:18:50.668655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.231 [2024-07-14 10:18:50.668799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:20.231 [2024-07-14 10:18:50.669084] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.231 [2024-07-14 10:18:50.669117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:21.167 10:18:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:26:21.167 10:18:51 -- common/autotest_common.sh@852 -- # return 0 00:26:21.167 10:18:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:26:21.167 10:18:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:21.167 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.167 10:18:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:21.167 10:18:51 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:21.167 10:18:51 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:26:21.167 10:18:51 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:26:21.167 10:18:51 -- scripts/common.sh@311 -- # local bdf bdfs 00:26:21.167 10:18:51 -- scripts/common.sh@312 -- # local nvmes 00:26:21.167 10:18:51 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:26:21.167 10:18:51 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:26:21.167 10:18:51 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:26:21.167 10:18:51 -- scripts/common.sh@297 -- # local bdf= 00:26:21.167 10:18:51 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:26:21.167 10:18:51 -- scripts/common.sh@232 -- # local class 00:26:21.167 10:18:51 -- scripts/common.sh@233 -- # local subclass 00:26:21.167 10:18:51 -- scripts/common.sh@234 -- # local progif 00:26:21.167 10:18:51 -- scripts/common.sh@235 -- # printf %02x 1 00:26:21.167 10:18:51 -- scripts/common.sh@235 -- # class=01 00:26:21.167 10:18:51 -- scripts/common.sh@236 -- # printf %02x 8 00:26:21.167 10:18:51 -- scripts/common.sh@236 -- # subclass=08 00:26:21.167 10:18:51 -- scripts/common.sh@237 -- # printf %02x 2 00:26:21.167 10:18:51 -- scripts/common.sh@237 -- # progif=02 00:26:21.167 10:18:51 -- scripts/common.sh@239 -- # hash lspci 00:26:21.167 10:18:51 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:26:21.167 10:18:51 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:26:21.167 10:18:51 -- scripts/common.sh@242 -- # grep -i -- -p02 00:26:21.167 10:18:51 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:26:21.167 10:18:51 -- scripts/common.sh@244 -- # tr -d '"' 00:26:21.167 10:18:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:21.167 10:18:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:26:21.167 10:18:51 -- scripts/common.sh@15 -- # local i 00:26:21.167 10:18:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:26:21.167 10:18:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:21.167 10:18:51 -- scripts/common.sh@24 -- # return 0 00:26:21.167 10:18:51 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:26:21.167 10:18:51 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:26:21.167 10:18:51 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:26:21.167 10:18:51 -- scripts/common.sh@15 -- # local i 00:26:21.167 10:18:51 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:26:21.167 10:18:51 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:26:21.167 10:18:51 -- scripts/common.sh@24 -- # return 0 00:26:21.167 10:18:51 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:26:21.167 10:18:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:21.167 10:18:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:26:21.167 10:18:51 -- scripts/common.sh@322 -- # uname -s 00:26:21.167 10:18:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:21.167 10:18:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:21.167 10:18:51 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:26:21.167 10:18:51 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:26:21.167 10:18:51 -- scripts/common.sh@322 -- # uname -s 00:26:21.167 10:18:51 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:26:21.168 10:18:51 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:26:21.168 10:18:51 -- scripts/common.sh@327 -- # (( 2 )) 00:26:21.168 10:18:51 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:26:21.168 10:18:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:21.168 10:18:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 ************************************ 00:26:21.168 START TEST spdk_target_abort 00:26:21.168 ************************************ 00:26:21.168 10:18:51 -- common/autotest_common.sh@1104 -- # spdk_target 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:26:21.168 10:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 spdk_targetn1 00:26:21.168 10:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:21.168 10:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 [2024-07-14 
10:18:51.596328] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:21.168 10:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:26:21.168 10:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 10:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:26:21.168 10:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 10:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:26:21.168 10:18:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:21.168 10:18:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.168 [2024-07-14 10:18:51.624449] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:21.168 10:18:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:21.168 10:18:51 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:24.453 Initializing NVMe Controllers 00:26:24.453 Attached to 
NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:24.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:24.453 Initialization complete. Launching workers. 00:26:24.453 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 10255, failed: 0 00:26:24.453 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1064, failed to submit 9191 00:26:24.453 success 772, unsuccess 292, failed 0 00:26:24.453 10:18:54 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:24.453 10:18:54 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:27.739 [2024-07-14 10:18:58.111248] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111312] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111340] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111349] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111357] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111365] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 [2024-07-14 10:18:58.111381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13d6ff0 is same with the state(5) to be set 00:26:27.739 Initializing NVMe Controllers 00:26:27.739 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:27.739 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:27.739 Initialization complete. Launching workers. 00:26:27.739 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 6023, failed: 0 00:26:27.739 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1242, failed to submit 4781 00:26:27.739 success 263, unsuccess 979, failed 0 00:26:27.739 10:18:58 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:27.739 10:18:58 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:26:31.023 Initializing NVMe Controllers 00:26:31.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:26:31.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:26:31.023 Initialization complete. Launching workers. 
00:26:31.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 30605, failed: 0 00:26:31.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2658, failed to submit 27947 00:26:31.023 success 506, unsuccess 2152, failed 0 00:26:31.023 10:19:01 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:26:31.023 10:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.023 10:19:01 -- common/autotest_common.sh@10 -- # set +x 00:26:31.023 10:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.023 10:19:01 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:26:31.023 10:19:01 -- common/autotest_common.sh@551 -- # xtrace_disable 00:26:31.023 10:19:01 -- common/autotest_common.sh@10 -- # set +x 00:26:31.281 10:19:01 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:26:31.281 10:19:01 -- target/abort_qd_sizes.sh@62 -- # killprocess 102650 00:26:31.281 10:19:01 -- common/autotest_common.sh@926 -- # '[' -z 102650 ']' 00:26:31.281 10:19:01 -- common/autotest_common.sh@930 -- # kill -0 102650 00:26:31.281 10:19:01 -- common/autotest_common.sh@931 -- # uname 00:26:31.281 10:19:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:26:31.281 10:19:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 102650 00:26:31.281 killing process with pid 102650 00:26:31.281 10:19:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:26:31.282 10:19:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:26:31.282 10:19:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 102650' 00:26:31.282 10:19:01 -- common/autotest_common.sh@945 -- # kill 102650 00:26:31.282 10:19:01 -- common/autotest_common.sh@950 -- # wait 102650 00:26:31.540 00:26:31.540 real 0m10.520s 00:26:31.540 user 0m43.257s 00:26:31.540 sys 0m1.642s 00:26:31.540 10:19:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:31.540 10:19:02 -- common/autotest_common.sh@10 -- # set +x 00:26:31.540 ************************************ 00:26:31.540 END TEST spdk_target_abort 00:26:31.540 ************************************ 00:26:31.540 10:19:02 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:26:31.540 10:19:02 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:26:31.540 10:19:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:26:31.540 10:19:02 -- common/autotest_common.sh@10 -- # set +x 00:26:31.799 ************************************ 00:26:31.799 START TEST kernel_target_abort 00:26:31.799 ************************************ 00:26:31.799 10:19:02 -- common/autotest_common.sh@1104 -- # kernel_target 00:26:31.799 10:19:02 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:26:31.799 10:19:02 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:26:31.799 10:19:02 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:26:31.799 10:19:02 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:26:31.799 10:19:02 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:26:31.799 10:19:02 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:31.799 10:19:02 -- nvmf/common.sh@625 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:31.799 10:19:02 -- nvmf/common.sh@627 -- # local block nvme 00:26:31.799 10:19:02 -- 
nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:26:31.799 10:19:02 -- nvmf/common.sh@630 -- # modprobe nvmet 00:26:31.799 10:19:02 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:31.799 10:19:02 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:32.058 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:32.058 Waiting for block devices as requested 00:26:32.058 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.058 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:26:32.317 10:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.317 10:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:32.317 10:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:26:32.317 10:19:02 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:26:32.317 10:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.317 No valid GPT data, bailing 00:26:32.317 10:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.317 10:19:02 -- scripts/common.sh@393 -- # pt= 00:26:32.317 10:19:02 -- scripts/common.sh@394 -- # return 1 00:26:32.317 10:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:26:32.317 10:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.317 10:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:26:32.317 10:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:26:32.317 10:19:02 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:26:32.317 10:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:26:32.317 No valid GPT data, bailing 00:26:32.317 10:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:26:32.317 10:19:02 -- scripts/common.sh@393 -- # pt= 00:26:32.317 10:19:02 -- scripts/common.sh@394 -- # return 1 00:26:32.317 10:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:26:32.317 10:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.317 10:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:26:32.317 10:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:26:32.317 10:19:02 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:26:32.317 10:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:26:32.317 No valid GPT data, bailing 00:26:32.575 10:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:26:32.575 10:19:02 -- scripts/common.sh@393 -- # pt= 00:26:32.575 10:19:02 -- scripts/common.sh@394 -- # return 1 00:26:32.575 10:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:26:32.575 10:19:02 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:26:32.575 10:19:02 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:26:32.575 10:19:02 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:26:32.575 10:19:02 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:26:32.575 10:19:02 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:26:32.576 No valid GPT data, bailing 00:26:32.576 10:19:02 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:26:32.576 10:19:02 -- scripts/common.sh@393 -- # pt= 00:26:32.576 10:19:02 -- scripts/common.sh@394 -- # return 1 00:26:32.576 10:19:02 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:26:32.576 10:19:02 -- nvmf/common.sh@643 -- # [[ -b 
/dev/nvme1n3 ]] 00:26:32.576 10:19:02 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:32.576 10:19:02 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:32.576 10:19:02 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.576 10:19:02 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:26:32.576 10:19:02 -- nvmf/common.sh@654 -- # echo 1 00:26:32.576 10:19:02 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:26:32.576 10:19:02 -- nvmf/common.sh@656 -- # echo 1 00:26:32.576 10:19:02 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:26:32.576 10:19:02 -- nvmf/common.sh@663 -- # echo tcp 00:26:32.576 10:19:02 -- nvmf/common.sh@664 -- # echo 4420 00:26:32.576 10:19:02 -- nvmf/common.sh@665 -- # echo ipv4 00:26:32.576 10:19:02 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:32.576 10:19:02 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:62b509c9-30ed-43a8-a92a-70e3bfe74b31 --hostid=62b509c9-30ed-43a8-a92a-70e3bfe74b31 -a 10.0.0.1 -t tcp -s 4420 00:26:32.576 00:26:32.576 Discovery Log Number of Records 2, Generation counter 2 00:26:32.576 =====Discovery Log Entry 0====== 00:26:32.576 trtype: tcp 00:26:32.576 adrfam: ipv4 00:26:32.576 subtype: current discovery subsystem 00:26:32.576 treq: not specified, sq flow control disable supported 00:26:32.576 portid: 1 00:26:32.576 trsvcid: 4420 00:26:32.576 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:32.576 traddr: 10.0.0.1 00:26:32.576 eflags: none 00:26:32.576 sectype: none 00:26:32.576 =====Discovery Log Entry 1====== 00:26:32.576 trtype: tcp 00:26:32.576 adrfam: ipv4 00:26:32.576 subtype: nvme subsystem 00:26:32.576 treq: not specified, sq flow control disable supported 00:26:32.576 portid: 1 00:26:32.576 trsvcid: 4420 00:26:32.576 subnqn: kernel_target 00:26:32.576 traddr: 10.0.0.1 00:26:32.576 eflags: none 00:26:32.576 sectype: none 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 
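The discovery log above confirms the kernel target is reachable at 10.0.0.1:4420 with /dev/nvme1n3 (the last block device the GPT scan above left as usable) exported as namespace 1; the abort runs against it follow. For reference, a reconstruction of the configure_kernel_target sequence from the trace; the right-hand-side attribute files follow the standard nvmet configfs layout and are inferred, since the xtrace shows only the echoed values:

    modprobe nvmet                               # loaded earlier, once /sys/module/nvmet was absent
    cd /sys/kernel/config/nvmet
    mkdir subsystems/kernel_target
    mkdir subsystems/kernel_target/namespaces/1
    mkdir ports/1
    echo SPDK-kernel_target > subsystems/kernel_target/attr_serial       # inferred target file
    echo 1            > subsystems/kernel_target/attr_allow_any_host     # inferred target file
    echo /dev/nvme1n3 > subsystems/kernel_target/namespaces/1/device_path
    echo 1            > subsystems/kernel_target/namespaces/1/enable
    echo 10.0.0.1     > ports/1/addr_traddr
    echo tcp          > ports/1/addr_trtype
    echo 4420         > ports/1/addr_trsvcid
    echo ipv4         > ports/1/addr_adrfam
    ln -s /sys/kernel/config/nvmet/subsystems/kernel_target ports/1/subsystems/

clean_kernel_target reverses this at the end of the test (echo 0 to the namespace enable, remove the port symlink, rmdir the namespace, port and subsystem, then modprobe -r nvmet_tcp nvmet), as the trace shows after the third abort run.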
00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:32.576 10:19:03 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:35.898 Initializing NVMe Controllers 00:26:35.898 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:35.898 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:35.898 Initialization complete. Launching workers. 00:26:35.898 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 31610, failed: 0 00:26:35.898 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 31610, failed to submit 0 00:26:35.898 success 0, unsuccess 31610, failed 0 00:26:35.898 10:19:06 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:35.898 10:19:06 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:39.179 Initializing NVMe Controllers 00:26:39.179 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:39.179 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:39.179 Initialization complete. Launching workers. 00:26:39.179 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 68832, failed: 0 00:26:39.179 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29479, failed to submit 39353 00:26:39.179 success 0, unsuccess 29479, failed 0 00:26:39.179 10:19:09 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:39.179 10:19:09 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:26:42.466 Initializing NVMe Controllers 00:26:42.466 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:26:42.466 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:26:42.466 Initialization complete. Launching workers. 
00:26:42.466 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 80099, failed: 0 00:26:42.466 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 20006, failed to submit 60093 00:26:42.466 success 0, unsuccess 20006, failed 0 00:26:42.466 10:19:12 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:26:42.466 10:19:12 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:26:42.466 10:19:12 -- nvmf/common.sh@677 -- # echo 0 00:26:42.466 10:19:12 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:26:42.466 10:19:12 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:26:42.466 10:19:12 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:42.466 10:19:12 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:26:42.466 10:19:12 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:26:42.466 10:19:12 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:26:42.466 00:26:42.466 real 0m10.519s 00:26:42.466 user 0m5.615s 00:26:42.466 sys 0m2.298s 00:26:42.466 10:19:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:42.466 10:19:12 -- common/autotest_common.sh@10 -- # set +x 00:26:42.466 ************************************ 00:26:42.466 END TEST kernel_target_abort 00:26:42.466 ************************************ 00:26:42.466 10:19:12 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:26:42.466 10:19:12 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:26:42.466 10:19:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:26:42.466 10:19:12 -- nvmf/common.sh@116 -- # sync 00:26:42.466 10:19:12 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:26:42.466 10:19:12 -- nvmf/common.sh@119 -- # set +e 00:26:42.466 10:19:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:26:42.466 10:19:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:26:42.466 rmmod nvme_tcp 00:26:42.466 rmmod nvme_fabrics 00:26:42.466 rmmod nvme_keyring 00:26:42.466 10:19:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:26:42.466 10:19:12 -- nvmf/common.sh@123 -- # set -e 00:26:42.466 10:19:12 -- nvmf/common.sh@124 -- # return 0 00:26:42.466 10:19:12 -- nvmf/common.sh@477 -- # '[' -n 102650 ']' 00:26:42.466 10:19:12 -- nvmf/common.sh@478 -- # killprocess 102650 00:26:42.466 10:19:12 -- common/autotest_common.sh@926 -- # '[' -z 102650 ']' 00:26:42.466 10:19:12 -- common/autotest_common.sh@930 -- # kill -0 102650 00:26:42.466 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (102650) - No such process 00:26:42.466 Process with pid 102650 is not found 00:26:42.466 10:19:12 -- common/autotest_common.sh@953 -- # echo 'Process with pid 102650 is not found' 00:26:42.466 10:19:12 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:26:42.466 10:19:12 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:26:43.034 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:43.034 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:43.034 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:43.034 10:19:13 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:26:43.034 10:19:13 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:26:43.034 10:19:13 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:43.034 10:19:13 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:26:43.034 10:19:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:43.034 10:19:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:43.034 10:19:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:43.034 10:19:13 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:26:43.034 00:26:43.034 real 0m24.393s 00:26:43.034 user 0m50.194s 00:26:43.034 sys 0m5.234s 00:26:43.034 10:19:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:43.034 10:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.034 ************************************ 00:26:43.034 END TEST nvmf_abort_qd_sizes 00:26:43.034 ************************************ 00:26:43.034 10:19:13 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:26:43.034 10:19:13 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:26:43.034 10:19:13 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:26:43.034 10:19:13 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:26:43.034 10:19:13 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:26:43.034 10:19:13 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:26:43.034 10:19:13 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:26:43.034 10:19:13 -- common/autotest_common.sh@712 -- # xtrace_disable 00:26:43.034 10:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.034 10:19:13 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:26:43.034 10:19:13 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:26:43.034 10:19:13 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:26:43.034 10:19:13 -- common/autotest_common.sh@10 -- # set +x 00:26:44.949 INFO: APP EXITING 00:26:44.949 INFO: killing all VMs 00:26:44.949 INFO: killing vhost app 00:26:44.949 INFO: EXIT DONE 00:26:45.517 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:45.517 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:26:45.517 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:26:46.174 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:26:46.174 Cleaning 00:26:46.174 Removing: /var/run/dpdk/spdk0/config 00:26:46.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:26:46.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:26:46.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:26:46.174 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:26:46.174 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:26:46.174 Removing: /var/run/dpdk/spdk0/hugepage_info 00:26:46.174 Removing: /var/run/dpdk/spdk1/config 00:26:46.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:26:46.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:26:46.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:26:46.174 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:26:46.174 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:26:46.174 Removing: /var/run/dpdk/spdk1/hugepage_info 00:26:46.174 Removing: /var/run/dpdk/spdk2/config 00:26:46.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:26:46.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:26:46.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:26:46.174 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:26:46.174 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:26:46.174 Removing: /var/run/dpdk/spdk2/hugepage_info 00:26:46.174 Removing: /var/run/dpdk/spdk3/config 00:26:46.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:26:46.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:26:46.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:26:46.174 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:26:46.174 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:26:46.174 Removing: /var/run/dpdk/spdk3/hugepage_info 00:26:46.174 Removing: /var/run/dpdk/spdk4/config 00:26:46.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:26:46.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:26:46.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:26:46.174 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:26:46.174 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:26:46.174 Removing: /var/run/dpdk/spdk4/hugepage_info 00:26:46.174 Removing: /dev/shm/nvmf_trace.0 00:26:46.174 Removing: /dev/shm/spdk_tgt_trace.pid67512 00:26:46.174 Removing: /var/run/dpdk/spdk0 00:26:46.174 Removing: /var/run/dpdk/spdk1 00:26:46.174 Removing: /var/run/dpdk/spdk2 00:26:46.174 Removing: /var/run/dpdk/spdk3 00:26:46.174 Removing: /var/run/dpdk/spdk4 00:26:46.457 Removing: /var/run/dpdk/spdk_pid100159 00:26:46.457 Removing: /var/run/dpdk/spdk_pid100450 00:26:46.457 Removing: /var/run/dpdk/spdk_pid100999 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101010 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101367 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101526 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101683 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101780 00:26:46.457 Removing: /var/run/dpdk/spdk_pid101940 00:26:46.457 Removing: /var/run/dpdk/spdk_pid102049 00:26:46.457 Removing: /var/run/dpdk/spdk_pid102718 00:26:46.457 Removing: /var/run/dpdk/spdk_pid102756 00:26:46.457 Removing: /var/run/dpdk/spdk_pid102791 00:26:46.457 Removing: /var/run/dpdk/spdk_pid103038 00:26:46.457 Removing: /var/run/dpdk/spdk_pid103071 00:26:46.457 Removing: /var/run/dpdk/spdk_pid103105 00:26:46.457 Removing: /var/run/dpdk/spdk_pid67368 00:26:46.457 Removing: /var/run/dpdk/spdk_pid67512 00:26:46.457 Removing: /var/run/dpdk/spdk_pid67812 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68087 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68251 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68332 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68423 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68506 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68550 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68580 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68635 00:26:46.457 Removing: /var/run/dpdk/spdk_pid68758 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69382 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69446 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69515 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69543 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69624 00:26:46.457 Removing: 
/var/run/dpdk/spdk_pid69652 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69750 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69778 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69824 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69854 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69911 00:26:46.457 Removing: /var/run/dpdk/spdk_pid69941 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70088 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70123 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70191 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70268 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70297 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70351 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70376 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70411 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70430 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70465 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70484 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70519 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70538 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70573 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70592 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70627 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70646 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70683 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70702 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70737 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70762 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70791 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70805 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70845 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70859 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70899 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70913 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70948 00:26:46.457 Removing: /var/run/dpdk/spdk_pid70967 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71001 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71021 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71050 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71070 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71104 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71124 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71157 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71172 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71207 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71229 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71267 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71289 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71327 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71341 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71381 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71395 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71436 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71494 00:26:46.457 Removing: /var/run/dpdk/spdk_pid71604 00:26:46.457 Removing: /var/run/dpdk/spdk_pid72010 00:26:46.457 Removing: /var/run/dpdk/spdk_pid78730 00:26:46.457 Removing: /var/run/dpdk/spdk_pid79071 00:26:46.457 Removing: /var/run/dpdk/spdk_pid81455 00:26:46.457 Removing: /var/run/dpdk/spdk_pid81835 00:26:46.457 Removing: /var/run/dpdk/spdk_pid82077 00:26:46.457 Removing: /var/run/dpdk/spdk_pid82123 00:26:46.457 Removing: /var/run/dpdk/spdk_pid82430 00:26:46.457 Removing: /var/run/dpdk/spdk_pid82480 00:26:46.457 Removing: /var/run/dpdk/spdk_pid82849 00:26:46.457 Removing: /var/run/dpdk/spdk_pid83362 00:26:46.457 Removing: /var/run/dpdk/spdk_pid83787 00:26:46.457 Removing: /var/run/dpdk/spdk_pid84743 00:26:46.715 Removing: /var/run/dpdk/spdk_pid85710 00:26:46.715 Removing: /var/run/dpdk/spdk_pid85822 
00:26:46.715 Removing: /var/run/dpdk/spdk_pid85890 00:26:46.715 Removing: /var/run/dpdk/spdk_pid87341 00:26:46.715 Removing: /var/run/dpdk/spdk_pid87573 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88007 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88117 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88265 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88316 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88356 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88396 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88558 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88712 00:26:46.716 Removing: /var/run/dpdk/spdk_pid88965 00:26:46.716 Removing: /var/run/dpdk/spdk_pid89082 00:26:46.716 Removing: /var/run/dpdk/spdk_pid89503 00:26:46.716 Removing: /var/run/dpdk/spdk_pid89874 00:26:46.716 Removing: /var/run/dpdk/spdk_pid89886 00:26:46.716 Removing: /var/run/dpdk/spdk_pid92109 00:26:46.716 Removing: /var/run/dpdk/spdk_pid92414 00:26:46.716 Removing: /var/run/dpdk/spdk_pid92905 00:26:46.716 Removing: /var/run/dpdk/spdk_pid92908 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93237 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93257 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93275 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93310 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93315 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93454 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93461 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93564 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93576 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93680 00:26:46.716 Removing: /var/run/dpdk/spdk_pid93686 00:26:46.716 Removing: /var/run/dpdk/spdk_pid94146 00:26:46.716 Removing: /var/run/dpdk/spdk_pid94189 00:26:46.716 Removing: /var/run/dpdk/spdk_pid94340 00:26:46.716 Removing: /var/run/dpdk/spdk_pid94461 00:26:46.716 Removing: /var/run/dpdk/spdk_pid94847 00:26:46.716 Removing: /var/run/dpdk/spdk_pid95098 00:26:46.716 Removing: /var/run/dpdk/spdk_pid95577 00:26:46.716 Removing: /var/run/dpdk/spdk_pid96142 00:26:46.716 Removing: /var/run/dpdk/spdk_pid96601 00:26:46.716 Removing: /var/run/dpdk/spdk_pid96687 00:26:46.716 Removing: /var/run/dpdk/spdk_pid96776 00:26:46.716 Removing: /var/run/dpdk/spdk_pid96862 00:26:46.716 Removing: /var/run/dpdk/spdk_pid97025 00:26:46.716 Removing: /var/run/dpdk/spdk_pid97110 00:26:46.716 Removing: /var/run/dpdk/spdk_pid97199 00:26:46.716 Removing: /var/run/dpdk/spdk_pid97285 00:26:46.716 Removing: /var/run/dpdk/spdk_pid97629 00:26:46.716 Removing: /var/run/dpdk/spdk_pid98318 00:26:46.716 Removing: /var/run/dpdk/spdk_pid99666 00:26:46.716 Removing: /var/run/dpdk/spdk_pid99868 00:26:46.716 Clean 00:26:46.716 killing process with pid 61729 00:26:46.716 killing process with pid 61735 00:26:46.973 10:19:17 -- common/autotest_common.sh@1436 -- # return 0 00:26:46.973 10:19:17 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:26:46.973 10:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.973 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:26:46.973 10:19:17 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:26:46.973 10:19:17 -- common/autotest_common.sh@718 -- # xtrace_disable 00:26:46.973 10:19:17 -- common/autotest_common.sh@10 -- # set +x 00:26:46.973 10:19:17 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:26:46.973 10:19:17 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:26:46.973 10:19:17 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:26:46.973 10:19:17 
00:26:46.973 10:19:17 -- spdk/autotest.sh@394 -- # hash lcov
00:26:46.973 10:19:17 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:26:46.973 10:19:17 -- spdk/autotest.sh@396 -- # hostname
00:26:46.973 10:19:17 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info
00:26:47.231 geninfo: WARNING: invalid characters removed from testname!
00:27:09.154 10:19:38 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:11.684 10:19:41 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:14.214 10:19:44 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:16.119 10:19:46 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:18.656 10:19:48 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:21.220 10:19:51 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info
00:27:23.750 10:19:53 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
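The lcov sequence above is the standard capture-merge-filter workflow: -c -d captures the post-test counters into cov_test.info, -a folds that capture into the pre-test baseline cov_base.info (a baseline of this shape is typically produced earlier with lcov -c -i), and the repeated -r passes strip third-party and helper-tool paths, rewriting cov_total.info in place on each pass. Condensed into a standalone sketch, with $REPO and $OUT as placeholder paths:

    # Condensed form of the capture-merge-filter steps traced above;
    # $REPO and $OUT stand in for the repository and output directories.
    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
    $LCOV -c -d "$REPO" -t "$(hostname)" -o "$OUT/cov_test.info"                   # capture post-test counters
    $LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info" # merge with pre-test baseline
    $LCOV -r "$OUT/cov_total.info" '*/dpdk/*' -o "$OUT/cov_total.info"             # drop bundled DPDK sources
    $LCOV -r "$OUT/cov_total.info" '/usr/*' -o "$OUT/cov_total.info"               # drop system headers

Filtering into the same file works because lcov reads the input tracefile fully before writing its output, which is exactly what the traced steps rely on.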
00:27:23.750 10:19:54 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:27:23.750 10:19:54 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:27:23.750 10:19:54 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:23.750 10:19:54 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:23.750 10:19:54 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:23.750 10:19:54 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:23.750 10:19:54 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:23.750 10:19:54 -- paths/export.sh@5 -- $ export PATH
00:27:23.750 10:19:54 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
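Each source of paths/export.sh prepends the golangci-lint, Go, and protoc directories unconditionally, so repeated sourcing accumulates duplicates: in the final echo above, each tool directory appears twice. A hedged alternative, not what export.sh actually does, that keeps the prepend idempotent:

    # Hypothetical duplicate-free variant of the prepend pattern traced above.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;             # already a PATH component; do nothing
            *) PATH="$1:$PATH" ;;    # otherwise prepend it
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/go/1.21.1/bin
    path_prepend /opt/protoc/21.7/bin
    export PATH

The case-pattern membership test makes the script safe to source any number of times, and the resulting front-of-PATH order (protoc, go, golangci) matches the echo above.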
00:27:23.750 10:19:54 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:27:23.750 10:19:54 -- common/autobuild_common.sh@435 -- $ date +%s
00:27:23.750 10:19:54 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720952394.XXXXXX
00:27:23.750 10:19:54 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720952394.x3lbdB
00:27:23.750 10:19:54 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:27:23.750 10:19:54 -- common/autobuild_common.sh@441 -- $ '[' -n v23.11 ']'
00:27:23.750 10:19:54 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build
00:27:23.750 10:19:54 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk'
00:27:23.750 10:19:54 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:27:23.750 10:19:54 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:27:23.750 10:19:54 -- common/autobuild_common.sh@451 -- $ get_config_params
00:27:23.750 10:19:54 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:27:23.750 10:19:54 -- common/autotest_common.sh@10 -- $ set +x
00:27:23.750 10:19:54 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-avahi --with-golang'
00:27:23.750 10:19:54 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10
00:27:23.750 10:19:54 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk
00:27:23.750 10:19:54 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:27:23.750 10:19:54 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]]
00:27:23.750 10:19:54 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:27:23.750 10:19:54 -- spdk/autopackage.sh@19 -- $ timing_finish
00:27:23.750 10:19:54 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:27:23.750 10:19:54 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:27:23.750 10:19:54 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt
00:27:23.750 10:19:54 -- spdk/autopackage.sh@20 -- $ exit 0
00:27:23.750 + [[ -n 5877 ]]
00:27:23.750 + sudo kill 5877
00:27:23.759 [Pipeline] }
00:27:23.780 [Pipeline] // timeout
00:27:23.787 [Pipeline] }
00:27:23.806 [Pipeline] // stage
00:27:23.813 [Pipeline] }
00:27:23.831 [Pipeline] // catchError
00:27:23.839 [Pipeline] stage
00:27:23.841 [Pipeline] { (Stop VM)
00:27:23.854 [Pipeline] sh
00:27:24.132 + vagrant halt
00:27:27.421 ==> default: Halting domain...
00:27:33.989 [Pipeline] sh
00:27:34.262 + vagrant destroy -f
00:27:37.559 ==> default: Removing domain...
00:27:37.567 [Pipeline] sh
00:27:37.841 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output
00:27:37.850 [Pipeline] }
00:27:37.918 [Pipeline] // stage
00:27:37.923 [Pipeline] }
00:27:37.934 [Pipeline] // dir
00:27:37.939 [Pipeline] }
00:27:37.950 [Pipeline] // wrap
00:27:37.955 [Pipeline] }
00:27:37.964 [Pipeline] // catchError
00:27:37.971 [Pipeline] stage
00:27:37.972 [Pipeline] { (Epilogue)
00:27:37.983 [Pipeline] sh
00:27:38.299 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:27:43.578 [Pipeline] catchError
00:27:43.580 [Pipeline] {
00:27:43.593 [Pipeline] sh
00:27:43.873 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:27:44.132 Artifacts sizes are good
00:27:44.144 [Pipeline] }
00:27:44.166 [Pipeline] // catchError
00:27:44.181 [Pipeline] archiveArtifacts
00:27:44.190 Archiving artifacts
00:27:44.357 [Pipeline] cleanWs
00:27:44.372 [WS-CLEANUP] Deleting project workspace...
00:27:44.372 [WS-CLEANUP] Deferred wipeout is used...
00:27:44.378 [WS-CLEANUP] done
00:27:44.381 [Pipeline] }
00:27:44.403 [Pipeline] // stage
00:27:44.408 [Pipeline] }
00:27:44.425 [Pipeline] // node
00:27:44.431 [Pipeline] End of Pipeline
00:27:44.470 Finished: SUCCESS
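One step in the trace above deserves a note: the $scanbuild wrapper assembled at autobuild_common.sh@450 is only constructed here; in this excerpt autopackage exits at @20 before it is ever invoked. When such a wrapper is used, the typical application is to prefix the build command, with --status-bugs making scan-build exit non-zero if the Clang static analyzer reports anything. A sketch of that usage, assuming the same paths as the assembled string:

    # Sketch of applying the wrapper assembled above (this run never ran it).
    # -o sets the analyzer report directory; --exclude skips third-party code;
    # --status-bugs turns analyzer findings into a failing exit status.
    scan-build -o /home/vagrant/spdk_repo/output/scan-build-tmp \
        --exclude /home/vagrant/spdk_repo/dpdk \
        --exclude /home/vagrant/spdk_repo/spdk/xnvme \
        --exclude /tmp \
        --status-bugs make -j10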